Merge pull request #3558 from freqtrade/bt_add_maxdrawdown

Revise backtesting export format, add some metrics

commit 3d515ed5bf
@@ -157,17 +157,32 @@ A backtesting result will look like that:
| ADA/BTC | 1 | 0.89 | 0.89 | 0.00004434 | 0.44 | 6:00:00 | 1 | 0 | 0 |
| LTC/BTC | 1 | 0.68 | 0.68 | 0.00003421 | 0.34 | 2:00:00 | 1 | 0 | 0 |
| TOTAL | 2 | 0.78 | 1.57 | 0.00007855 | 0.78 | 4:00:00 | 2 | 0 | 0 |
=============== SUMMARY METRICS ===============
| Metric                | Value               |
|-----------------------+---------------------|
| Backtesting from      | 2019-01-01 00:00:00 |
| Backtesting to        | 2019-05-01 00:00:00 |
| Total trades          | 429                 |
| First trade           | 2019-01-01 18:30:00 |
| First trade Pair      | EOS/USDT            |
| Total Profit %        | 152.41%             |
| Trades per day        | 3.575               |
| Best day              | 25.27%              |
| Worst day             | -30.67%             |
| Avg. Duration Winners | 4:23:00             |
| Avg. Duration Loser   | 6:55:00             |
|                       |                     |
| Max Drawdown          | 50.63%              |
| Drawdown Start        | 2019-02-15 14:10:00 |
| Drawdown End          | 2019-04-11 18:15:00 |
| Market change         | -5.88%              |
===============================================
```

### Backtesting report table

The 1st table contains all trades the bot made, including "left open trades".

The 2nd table contains a recap of sell reasons.
This table can tell you which area needs some additional work (i.e. all `sell_signal` trades are losses, so we should disable the sell-signal or work on improving that).

The 3rd table contains all trades the bot had to `forcesell` at the end of the backtest period to present a full picture.
This is necessary to simulate realistic behaviour, since the backtest period has to end at some point, while realistically, you could leave the bot running forever.
These trades are also included in the first table, but are extracted separately for clarity.

The last line will give you the overall performance of your strategy,
here:

@@ -196,6 +211,58 @@ On the other hand, if you set a too high `minimal_roi` like `"0": 0.55`
(55%), there is almost no chance that the bot will ever reach this profit.
Hence, keep in mind that your performance is an integral mix of all different elements of the strategy, your configuration, and the crypto-currency pairs you have set up.
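
For reference, `minimal_roi` is set inside the strategy class as a dict of profit ratios keyed by trade age in minutes. A minimal sketch — the values below are purely illustrative and should be tuned to your own strategy and timeframe:

```python
# Inside your strategy class - illustrative values only.
minimal_roi = {
    "0": 0.04,   # take 4% profit right away ...
    "20": 0.02,  # ... 2% after 20 minutes ...
    "30": 0.01,  # ... 1% after 30 minutes ...
    "40": 0.0,   # ... and exit at break-even after 40 minutes.
}
```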

### Sell reasons table

The 2nd table contains a recap of sell reasons.
This table can tell you which area needs some additional work (e.g. all or many of the `sell_signal` trades are losses, so you should work on improving the sell signal, or consider disabling it).

### Left open trades table

The 3rd table contains all trades the bot had to `forcesell` at the end of the backtesting period to present you the full picture.
This is necessary to simulate realistic behavior, since the backtest period has to end at some point, while realistically, you could leave the bot running forever.
These trades are also included in the first table, but are also shown separately in this table for clarity.

### Summary metrics

The last element of the backtest report is the summary metrics table.
It contains some useful key metrics about performance of your strategy on backtesting data.

```
=============== SUMMARY METRICS ===============
| Metric                | Value               |
|-----------------------+---------------------|
| Backtesting from      | 2019-01-01 00:00:00 |
| Backtesting to        | 2019-05-01 00:00:00 |
| Total trades          | 429                 |
| First trade           | 2019-01-01 18:30:00 |
| First trade Pair      | EOS/USDT            |
| Total Profit %        | 152.41%             |
| Trades per day        | 3.575               |
| Best day              | 25.27%              |
| Worst day             | -30.67%             |
| Avg. Duration Winners | 4:23:00             |
| Avg. Duration Loser   | 6:55:00             |
|                       |                     |
| Max Drawdown          | 50.63%              |
| Drawdown Start        | 2019-02-15 14:10:00 |
| Drawdown End          | 2019-04-11 18:15:00 |
| Market change         | -5.88%              |
===============================================

```

- `Total trades`: Identical to the total trades of the backtest output table.
- `First trade`: First trade entered.
- `First trade pair`: Which pair was part of the first trade.
- `Backtesting from` / `Backtesting to`: Backtesting range (usually defined with the `--timerange` option).
- `Total Profit %`: Total profit per stake amount. Aligned to the TOTAL column of the first table.
- `Trades per day`: Total trades divided by the backtesting duration in days (this will give you information about how many trades to expect from the strategy).
- `Best day` / `Worst day`: Best and worst day based on daily profit.
- `Avg. Duration Winners` / `Avg. Duration Loser`: Average durations for winning and losing trades.
- `Max Drawdown`: Maximum drawdown experienced. For example, a value of 50% means that from the highest to the subsequent lowest point, a 50% drop was experienced.
- `Drawdown Start` / `Drawdown End`: Start and end datetimes for this largest drawdown (can also be visualized via the `plot-dataframe` sub-command).
- `Market change`: Change of the market during the backtest period. Calculated as the average of all pairs' changes from the first to the last candle using the "close" column.
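
These values are also written to the exported results file, so they can be read back programmatically with the `load_backtest_stats()` helper added in this change. A minimal sketch — the export directory and the `SampleStrategy` name are placeholders for your own setup:

```python
from freqtrade.data.btanalysis import load_backtest_stats

# Pointing at a backtest_results directory picks up the latest export automatically.
stats = load_backtest_stats("user_data/backtest_results")

# Statistics are stored per strategy.
strat_stats = stats["strategy"]["SampleStrategy"]
print(strat_stats["total_trades"], strat_stats["trades_per_day"])
print(strat_stats["max_drawdown"], strat_stats["market_change"])
```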

### Assumptions made by backtesting

Since backtesting lacks some detailed information about what happens within a candle, it needs to take a few assumptions:
@@ -224,7 +224,8 @@ Possible options for the `freqtrade plot-profit` subcommand:

```
usage: freqtrade plot-profit [-h] [-v] [--logfile FILE] [-V] [-c PATH]
                             [-d PATH] [--userdir PATH] [-p PAIRS [PAIRS ...]]
                             [-d PATH] [--userdir PATH] [-s NAME]
                             [--strategy-path PATH] [-p PAIRS [PAIRS ...]]
                             [--timerange TIMERANGE] [--export EXPORT]
                             [--export-filename PATH] [--db-url PATH]
                             [--trade-source {DB,file}] [-i TIMEFRAME]
@@ -270,6 +271,11 @@ Common arguments:
  --userdir PATH, --user-data-dir PATH
                        Path to userdata directory.

Strategy arguments:
  -s NAME, --strategy NAME
                        Specify strategy class name which will be used by the
                        bot.
  --strategy-path PATH  Specify additional strategy lookup path.
```

The `-p/--pairs` argument can be used to limit the pairs that are considered for this calculation.
@@ -279,7 +285,7 @@ Examples:
Use custom backtest-export file

``` bash
freqtrade plot-profit -p LTC/BTC --export-filename user_data/backtest_results/backtest-result-Strategy005.json
freqtrade plot-profit -p LTC/BTC --export-filename user_data/backtest_results/backtest-result.json
```

Use custom database
@@ -85,10 +85,44 @@ Analyze a trades dataframe (also used below for plotting)

```python
from freqtrade.data.btanalysis import load_backtest_data
from freqtrade.data.btanalysis import load_backtest_data, load_backtest_stats

# Load backtest results
trades = load_backtest_data(config["user_data_dir"] / "backtest_results/backtest-result.json")
# if backtest_dir points to a directory, it'll automatically load the last backtest file.
backtest_dir = config["user_data_dir"] / "backtest_results"
# backtest_dir can also point to a specific file
# backtest_dir = config["user_data_dir"] / "backtest_results/backtest-result-2020-07-01_20-04-22.json"
```

```python
# You can get the full backtest statistics by using the following command.
# This contains all information used to generate the backtest result.
stats = load_backtest_stats(backtest_dir)

strategy = 'SampleStrategy'
# All statistics are available per strategy, so if `--strategy-list` was used during backtest, this will be reflected here as well.
# Example usages:
print(stats['strategy'][strategy]['results_per_pair'])
# Get pairlist used for this backtest
print(stats['strategy'][strategy]['pairlist'])
# Get market change (average change of all pairs from start to end of the backtest period)
print(stats['strategy'][strategy]['market_change'])
# Maximum drawdown ()
print(stats['strategy'][strategy]['max_drawdown'])
# Maximum drawdown start and end
print(stats['strategy'][strategy]['drawdown_start'])
print(stats['strategy'][strategy]['drawdown_end'])


# Get strategy comparison (only relevant if multiple strategies were compared)
print(stats['strategy_comparison'])

```

```python
# Load backtested trades as dataframe
trades = load_backtest_data(backtest_dir)

# Show value-counts per pair
trades.groupby("pair")["sell_reason"].value_counts()
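
# A minimal sketch (illustrative, assuming the default 'close_date' /
# 'profit_percent' columns): the updated `calculate_max_drawdown` helper
# accepts this trades dataframe directly.
from freqtrade.data.btanalysis import calculate_max_drawdown

max_drawdown, high_date, low_date = calculate_max_drawdown(trades)
print(f"Max drawdown {max_drawdown:.2%} between {high_date} and {low_date}")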

@@ -366,7 +366,7 @@ class Arguments:
plot_profit_cmd = subparsers.add_parser(
'plot-profit',
help='Generate plot showing profits.',
parents=[_common_parser],
parents=[_common_parser, _strategy_parser],
)
plot_profit_cmd.set_defaults(func=start_plot_profit)
self._build_args(optionlist=ARGS_PLOT_PROFIT, parser=plot_profit_cmd)
@@ -199,7 +199,7 @@ class Configuration:
config['exportfilename'] = Path(config['exportfilename'])
else:
config['exportfilename'] = (config['user_data_dir']
/ 'backtest_results/backtest-result.json')
/ 'backtest_results')

def _process_optimize_options(self, config: Dict[str, Any]) -> None:
@@ -26,12 +26,15 @@ AVAILABLE_PAIRLISTS = ['StaticPairList', 'VolumePairList',
'ShuffleFilter', 'SpreadFilter']
AVAILABLE_DATAHANDLERS = ['json', 'jsongz']
DRY_RUN_WALLET = 1000
DATETIME_PRINT_FORMAT = '%Y-%m-%d %H:%M:%S'
MATH_CLOSE_PREC = 1e-14 # Precision used for float comparisons
DEFAULT_DATAFRAME_COLUMNS = ['date', 'open', 'high', 'low', 'close', 'volume']
# Don't modify sequence of DEFAULT_TRADES_COLUMNS
# it has wide consequences for stored trades files
DEFAULT_TRADES_COLUMNS = ['timestamp', 'id', 'type', 'side', 'price', 'amount', 'cost']

LAST_BT_RESULT_FN = '.last_result.json'

USERPATH_HYPEROPTS = 'hyperopts'
USERPATH_STRATEGIES = 'strategies'
USERPATH_NOTEBOOKS = 'notebooks'
@@ -3,52 +3,123 @@ Helpers when analyzing backtest data
"""
import logging
from pathlib import Path
from typing import Dict, Union, Tuple
from typing import Dict, Union, Tuple, Any, Optional

import numpy as np
import pandas as pd
from datetime import timezone

from freqtrade import persistence
from freqtrade.constants import LAST_BT_RESULT_FN
from freqtrade.misc import json_load
from freqtrade.persistence import Trade

logger = logging.getLogger(__name__)

# must align with columns in backtest.py
BT_DATA_COLUMNS = ["pair", "profit_percent", "open_time", "close_time", "index", "duration",
BT_DATA_COLUMNS = ["pair", "profit_percent", "open_date", "close_date", "index", "trade_duration",
"open_rate", "close_rate", "open_at_end", "sell_reason"]


def load_backtest_data(filename: Union[Path, str]) -> pd.DataFrame:
def get_latest_backtest_filename(directory: Union[Path, str]) -> str:
"""
Load backtest data file.
:param filename: pathlib.Path object, or string pointing to the file.
:return: a dataframe with the analysis results
Get latest backtest export based on '.last_result.json'.
:param directory: Directory to search for last result
:return: string containing the filename of the latest backtest result
:raises: ValueError in the following cases:
* Directory does not exist
* `directory/.last_result.json` does not exist
* `directory/.last_result.json` has the wrong content
"""
if isinstance(filename, str):
filename = Path(filename)
if isinstance(directory, str):
directory = Path(directory)
if not directory.is_dir():
raise ValueError(f"Directory '{directory}' does not exist.")
filename = directory / LAST_BT_RESULT_FN

if not filename.is_file():
raise ValueError(f"File {filename} does not exist.")
raise ValueError(
f"Directory '{directory}' does not seem to contain backtest statistics yet.")

with filename.open() as file:
data = json_load(file)

df = pd.DataFrame(data, columns=BT_DATA_COLUMNS)
if 'latest_backtest' not in data:
raise ValueError(f"Invalid '{LAST_BT_RESULT_FN}' format.")

df['open_time'] = pd.to_datetime(df['open_time'],
unit='s',
utc=True,
infer_datetime_format=True
)
df['close_time'] = pd.to_datetime(df['close_time'],
unit='s',
utc=True,
infer_datetime_format=True
)
df['profit'] = df['close_rate'] - df['open_rate']
df = df.sort_values("open_time").reset_index(drop=True)
return data['latest_backtest']


def load_backtest_stats(filename: Union[Path, str]) -> Dict[str, Any]:
"""
Load backtest statistics file.
:param filename: pathlib.Path object, or string pointing to the file.
:return: a dictionary containing the resulting file.
"""
if isinstance(filename, str):
filename = Path(filename)
if filename.is_dir():
filename = filename / get_latest_backtest_filename(filename)
if not filename.is_file():
raise ValueError(f"File {filename} does not exist.")
logger.info(f"Loading backtest result from {filename}")
with filename.open() as file:
data = json_load(file)

return data


def load_backtest_data(filename: Union[Path, str], strategy: Optional[str] = None) -> pd.DataFrame:
"""
Load backtest data file.
:param filename: pathlib.Path object, or string pointing to a file or directory
:param strategy: Strategy to load - mainly relevant for multi-strategy backtests
Can also serve as protection to load the correct result.
:return: a dataframe with the analysis results
:raise: ValueError if loading goes wrong.
"""
data = load_backtest_stats(filename)
if not isinstance(data, list):
# new, nested format
if 'strategy' not in data:
raise ValueError("Unknown dataformat.")

if not strategy:
if len(data['strategy']) == 1:
strategy = list(data['strategy'].keys())[0]
else:
raise ValueError("Detected backtest result with more than one strategy. "
"Please specify a strategy.")

if strategy not in data['strategy']:
raise ValueError(f"Strategy {strategy} not available in the backtest result.")

data = data['strategy'][strategy]['trades']
df = pd.DataFrame(data)
df['open_date'] = pd.to_datetime(df['open_date'],
utc=True,
infer_datetime_format=True
)
df['close_date'] = pd.to_datetime(df['close_date'],
utc=True,
infer_datetime_format=True
)
else:
# old format - only with lists.
df = pd.DataFrame(data, columns=BT_DATA_COLUMNS)

df['open_date'] = pd.to_datetime(df['open_date'],
unit='s',
utc=True,
infer_datetime_format=True
)
df['close_date'] = pd.to_datetime(df['close_date'],
unit='s',
utc=True,
infer_datetime_format=True
)
df['profit_abs'] = df['close_rate'] - df['open_rate']
df = df.sort_values("open_date").reset_index(drop=True)
return df
@@ -62,9 +133,9 @@ def analyze_trade_parallelism(results: pd.DataFrame, timeframe: str) -> pd.DataF
"""
from freqtrade.exchange import timeframe_to_minutes
timeframe_min = timeframe_to_minutes(timeframe)
dates = [pd.Series(pd.date_range(row[1].open_time, row[1].close_time,
dates = [pd.Series(pd.date_range(row[1]['open_date'], row[1]['close_date'],
freq=f"{timeframe_min}min"))
for row in results[['open_time', 'close_time']].iterrows()]
for row in results[['open_date', 'close_date']].iterrows()]
deltas = [len(x) for x in dates]
dates = pd.Series(pd.concat(dates).values, name='date')
df2 = pd.DataFrame(np.repeat(results.values, deltas, axis=0), columns=results.columns)
@@ -90,21 +161,26 @@ def evaluate_result_multi(results: pd.DataFrame, timeframe: str,
return df_final[df_final['open_trades'] > max_open_trades]


def load_trades_from_db(db_url: str) -> pd.DataFrame:
def load_trades_from_db(db_url: str, strategy: Optional[str] = None) -> pd.DataFrame:
"""
Load trades from a DB (using dburl)
:param db_url: Sqlite url (default format sqlite:///tradesv3.dry-run.sqlite)
:param strategy: Strategy to load - mainly relevant for multi-strategy backtests
Can also serve as protection to load the correct result.
:return: Dataframe containing Trades
"""
trades: pd.DataFrame = pd.DataFrame([], columns=BT_DATA_COLUMNS)
persistence.init(db_url, clean_open_orders=False)

columns = ["pair", "open_time", "close_time", "profit", "profit_percent",
"open_rate", "close_rate", "amount", "duration", "sell_reason",
columns = ["pair", "open_date", "close_date", "profit", "profit_percent",
"open_rate", "close_rate", "amount", "trade_duration", "sell_reason",
"fee_open", "fee_close", "open_rate_requested", "close_rate_requested",
"stake_amount", "max_rate", "min_rate", "id", "exchange",
"stop_loss", "initial_stop_loss", "strategy", "timeframe"]

filters = []
if strategy:
filters.append(Trade.strategy == strategy)

trades = pd.DataFrame([(t.pair,
t.open_date.replace(tzinfo=timezone.utc),
t.close_date.replace(tzinfo=timezone.utc) if t.close_date else None,
@@ -123,14 +199,14 @@ def load_trades_from_db(db_url: str) -> pd.DataFrame:
t.stop_loss, t.initial_stop_loss,
t.strategy, t.timeframe
)
for t in Trade.get_trades().all()],
for t in Trade.get_trades(filters).all()],
columns=columns)

return trades


def load_trades(source: str, db_url: str, exportfilename: Path,
no_trades: bool = False) -> pd.DataFrame:
no_trades: bool = False, strategy: Optional[str] = None) -> pd.DataFrame:
"""
Based on configuration option "trade_source":
* loads data from DB (using `db_url`)
@@ -148,7 +224,7 @@ def load_trades(source: str, db_url: str, exportfilename: Path,
if source == "DB":
return load_trades_from_db(db_url)
elif source == "file":
return load_backtest_data(exportfilename)
return load_backtest_data(exportfilename, strategy)


def extract_trades_of_period(dataframe: pd.DataFrame, trades: pd.DataFrame,
@@ -163,11 +239,31 @@ def extract_trades_of_period(dataframe: pd.DataFrame, trades: pd.DataFrame,
else:
trades_start = dataframe.iloc[0]['date']
trades_stop = dataframe.iloc[-1]['date']
trades = trades.loc[(trades['open_time'] >= trades_start) &
(trades['close_time'] <= trades_stop)]
trades = trades.loc[(trades['open_date'] >= trades_start) &
(trades['close_date'] <= trades_stop)]
return trades


def calculate_market_change(data: Dict[str, pd.DataFrame], column: str = "close") -> float:
"""
Calculate market change based on "column".
Calculation is done by taking the first non-null and the last non-null element of each column
and calculating the pctchange as "(last - first) / first".
Then the results per pair are combined as mean.

:param data: Dict of Dataframes, dict key should be pair.
:param column: Column in the original dataframes to use
:return:
"""
tmp_means = []
for pair, df in data.items():
start = df[column].dropna().iloc[0]
end = df[column].dropna().iloc[-1]
tmp_means.append((end - start) / start)

return np.mean(tmp_means)


def combine_dataframes_with_mean(data: Dict[str, pd.DataFrame],
column: str = "close") -> pd.DataFrame:
"""
@@ -190,7 +286,7 @@ def create_cum_profit(df: pd.DataFrame, trades: pd.DataFrame, col_name: str,
"""
Adds a column `col_name` with the cumulative profit for the given trades array.
:param df: DataFrame with date index
:param trades: DataFrame containing trades (requires columns close_time and profit_percent)
:param trades: DataFrame containing trades (requires columns close_date and profit_percent)
:param col_name: Column name that will be assigned the results
:param timeframe: Timeframe used during the operations
:return: Returns df with one additional column, col_name, containing the cumulative profit.
@@ -201,7 +297,7 @@ def create_cum_profit(df: pd.DataFrame, trades: pd.DataFrame, col_name: str,
from freqtrade.exchange import timeframe_to_minutes
timeframe_minutes = timeframe_to_minutes(timeframe)
# Resample to timeframe to make sure trades match candles
_trades_sum = trades.resample(f'{timeframe_minutes}min', on='close_time'
_trades_sum = trades.resample(f'{timeframe_minutes}min', on='close_date'
)[['profit_percent']].sum()
df.loc[:, col_name] = _trades_sum.cumsum()
# Set first value to 0
@@ -211,13 +307,13 @@ def create_cum_profit(df: pd.DataFrame, trades: pd.DataFrame, col_name: str,
return df


def calculate_max_drawdown(trades: pd.DataFrame, *, date_col: str = 'close_time',
def calculate_max_drawdown(trades: pd.DataFrame, *, date_col: str = 'close_date',
value_col: str = 'profit_percent'
) -> Tuple[float, pd.Timestamp, pd.Timestamp]:
"""
Calculate max drawdown and the corresponding close dates
:param trades: DataFrame containing trades (requires columns close_time and profit_percent)
:param date_col: Column in DataFrame to use for dates (defaults to 'close_time')
:param trades: DataFrame containing trades (requires columns close_date and profit_percent)
:param date_col: Column in DataFrame to use for dates (defaults to 'close_date')
:param value_col: Column in DataFrame to use for values (defaults to 'profit_percent')
:return: Tuple (float, highdate, lowdate) with absolute max drawdown, high and low time
:raise: ValueError if trade-dataframe was found empty.
@@ -9,7 +9,7 @@ import utils_find_1st as utf1st
from pandas import DataFrame

from freqtrade.configuration import TimeRange
from freqtrade.constants import UNLIMITED_STAKE_AMOUNT
from freqtrade.constants import UNLIMITED_STAKE_AMOUNT, DATETIME_PRINT_FORMAT
from freqtrade.exceptions import OperationalException
from freqtrade.data.history import get_timerange, load_data, refresh_data
from freqtrade.strategy.interface import SellType
@@ -121,12 +121,9 @@ class Edge:

# Print timeframe
min_date, max_date = get_timerange(preprocessed)
logger.info(
'Measuring data from %s up to %s (%s days) ...',
min_date.isoformat(),
max_date.isoformat(),
(max_date - min_date).days
)
logger.info(f'Measuring data from {min_date.strftime(DATETIME_PRINT_FORMAT)} '
f'up to {max_date.strftime(DATETIME_PRINT_FORMAT)} '
f'({(max_date - min_date).days} days)..')
headers = ['date', 'buy', 'open', 'close', 'sell', 'high', 'low']

trades: list = []
@@ -240,7 +237,7 @@ class Edge:
# All returned values are relative, they are defined as ratios.
stake = 0.015

result['trade_duration'] = result['close_time'] - result['open_time']
result['trade_duration'] = result['close_date'] - result['open_date']

result['trade_duration'] = result['trade_duration'].map(
lambda x: int(x.total_seconds() / 60))
@@ -430,10 +427,8 @@ class Edge:
'stoploss': stoploss,
'profit_ratio': '',
'profit_abs': '',
'open_time': date_column[open_trade_index],
'close_time': date_column[exit_index],
'open_index': start_point + open_trade_index,
'close_index': start_point + exit_index,
'open_date': date_column[open_trade_index],
'close_date': date_column[exit_index],
'trade_duration': '',
'open_rate': round(open_price, 15),
'close_rate': round(exit_price, 15),
@@ -13,6 +13,7 @@ from pandas import DataFrame

from freqtrade.configuration import (TimeRange, remove_credentials,
validate_config_consistency)
from freqtrade.constants import DATETIME_PRINT_FORMAT
from freqtrade.data import history
from freqtrade.data.converter import trim_dataframe
from freqtrade.data.dataprovider import DataProvider
@@ -20,7 +21,7 @@ from freqtrade.exceptions import OperationalException
from freqtrade.exchange import timeframe_to_minutes, timeframe_to_seconds
from freqtrade.optimize.optimize_reports import (generate_backtest_stats,
show_backtest_results,
store_backtest_result)
store_backtest_stats)
from freqtrade.pairlist.pairlistmanager import PairListManager
from freqtrade.persistence import Trade
from freqtrade.resolvers import ExchangeResolver, StrategyResolver
@@ -36,14 +37,15 @@ class BacktestResult(NamedTuple):
pair: str
profit_percent: float
profit_abs: float
open_time: datetime
close_time: datetime
open_index: int
close_index: int
open_date: datetime
open_rate: float
open_fee: float
close_date: datetime
close_rate: float
close_fee: float
amount: float
trade_duration: float
open_at_end: bool
open_rate: float
close_rate: float
sell_reason: SellType
@@ -135,10 +137,10 @@ class Backtesting:

min_date, max_date = history.get_timerange(data)

logger.info(
'Loading data from %s up to %s (%s days)..',
min_date.isoformat(), max_date.isoformat(), (max_date - min_date).days
)
logger.info(f'Loading data from {min_date.strftime(DATETIME_PRINT_FORMAT)} '
f'up to {max_date.strftime(DATETIME_PRINT_FORMAT)} '
f'({(max_date - min_date).days} days)..')

# Adjust startts forward if not enough data is available
timerange.adjust_start_if_necessary(timeframe_to_seconds(self.timeframe),
self.required_startup, min_date)
@@ -223,7 +225,7 @@ class Backtesting:
open_rate=buy_row.open,
open_date=buy_row.date,
stake_amount=stake_amount,
amount=stake_amount / buy_row.open,
amount=round(stake_amount / buy_row.open, 8),
fee_open=self.fee,
fee_close=self.fee,
is_open=True,
@@ -244,14 +246,15 @@ class Backtesting:
return BacktestResult(pair=pair,
profit_percent=trade.calc_profit_ratio(rate=closerate),
profit_abs=trade.calc_profit(rate=closerate),
open_time=buy_row.date,
close_time=sell_row.date,
trade_duration=trade_dur,
open_index=buy_row.Index,
close_index=sell_row.Index,
open_at_end=False,
open_date=buy_row.date,
open_rate=buy_row.open,
open_fee=self.fee,
close_date=sell_row.date,
close_rate=closerate,
close_fee=self.fee,
amount=trade.amount,
trade_duration=trade_dur,
open_at_end=False,
sell_reason=sell.sell_type
)
if partial_ohlcv:
@@ -260,15 +263,16 @@ class Backtesting:
bt_res = BacktestResult(pair=pair,
profit_percent=trade.calc_profit_ratio(rate=sell_row.open),
profit_abs=trade.calc_profit(rate=sell_row.open),
open_time=buy_row.date,
close_time=sell_row.date,
open_date=buy_row.date,
open_rate=buy_row.open,
open_fee=self.fee,
close_date=sell_row.date,
close_rate=sell_row.open,
close_fee=self.fee,
amount=trade.amount,
trade_duration=int((
sell_row.date - buy_row.date).total_seconds() // 60),
open_index=buy_row.Index,
close_index=sell_row.Index,
open_at_end=True,
open_rate=buy_row.open,
close_rate=sell_row.open,
sell_reason=SellType.FORCE_SELL
)
logger.debug(f"{pair} - Force selling still open trade, "
@@ -354,8 +358,8 @@ class Backtesting:

if trade_entry:
logger.debug(f"{pair} - Locking pair till "
f"close_time={trade_entry.close_time}")
lock_pair_until[pair] = trade_entry.close_time
f"close_date={trade_entry.close_date}")
lock_pair_until[pair] = trade_entry.close_date
trades.append(trade_entry)
else:
# Set lock_pair_until to end of testing period if trade could not be closed
@@ -398,10 +402,9 @@ class Backtesting:
preprocessed[pair] = trim_dataframe(df, timerange)
min_date, max_date = history.get_timerange(preprocessed)

logger.info(
'Backtesting with data from %s up to %s (%s days)..',
min_date.isoformat(), max_date.isoformat(), (max_date - min_date).days
)
logger.info(f'Backtesting with data from {min_date.strftime(DATETIME_PRINT_FORMAT)} '
f'up to {max_date.strftime(DATETIME_PRINT_FORMAT)} '
f'({(max_date - min_date).days} days)..')
# Execute backtest and print results
all_results[self.strategy.get_strategy_name()] = self.backtest(
processed=preprocessed,
@@ -412,8 +415,10 @@ class Backtesting:
position_stacking=position_stacking,
)

stats = generate_backtest_stats(self.config, data, all_results,
min_date=min_date, max_date=max_date)
if self.config.get('export', False):
store_backtest_result(self.config['exportfilename'], all_results)
store_backtest_stats(self.config['exportfilename'], stats)

# Show backtest results
stats = generate_backtest_stats(self.config, data, all_results)
show_backtest_results(self.config, stats)
@@ -25,6 +25,7 @@ from joblib import (Parallel, cpu_count, delayed, dump, load,
wrap_non_picklable_objects)
from pandas import DataFrame, isna, json_normalize

from freqtrade.constants import DATETIME_PRINT_FORMAT
from freqtrade.data.converter import trim_dataframe
from freqtrade.data.history import get_timerange
from freqtrade.exceptions import OperationalException
@@ -642,10 +643,10 @@ class Hyperopt:
preprocessed[pair] = trim_dataframe(df, timerange)
min_date, max_date = get_timerange(data)

logger.info(
'Hyperopting with data from %s up to %s (%s days)..',
min_date.isoformat(), max_date.isoformat(), (max_date - min_date).days
)
logger.info(f'Hyperopting with data from {min_date.strftime(DATETIME_PRINT_FORMAT)} '
f'up to {max_date.strftime(DATETIME_PRINT_FORMAT)} '
f'({(max_date - min_date).days} days)..')

dump(preprocessed, self.data_pickle_file)

# We don't need exchange instance anymore while running hyperopt
@@ -43,7 +43,7 @@ class SharpeHyperOptLossDaily(IHyperOptLoss):
normalize=True)

sum_daily = (
results.resample(resample_freq, on='close_time').agg(
results.resample(resample_freq, on='close_date').agg(
{"profit_percent_after_slippage": sum}).reindex(t_index).fillna(0)
)
@@ -45,7 +45,7 @@ class SortinoHyperOptLossDaily(IHyperOptLoss):
normalize=True)

sum_daily = (
results.resample(resample_freq, on='close_time').agg(
results.resample(resample_freq, on='close_date').agg(
{"profit_percent_after_slippage": sum}).reindex(t_index).fillna(0)
)
@@ -1,46 +1,40 @@
import logging
from datetime import timedelta
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Dict, List

from arrow import Arrow
from pandas import DataFrame
from numpy import int64
from tabulate import tabulate

from freqtrade.constants import DATETIME_PRINT_FORMAT, LAST_BT_RESULT_FN
from freqtrade.data.btanalysis import calculate_max_drawdown, calculate_market_change
from freqtrade.misc import file_dump_json

logger = logging.getLogger(__name__)


def store_backtest_result(recordfilename: Path, all_results: Dict[str, DataFrame]) -> None:
def store_backtest_stats(recordfilename: Path, stats: Dict[str, DataFrame]) -> None:
"""
Stores backtest results to file (one file per strategy)
:param recordfilename: Destination filename
:param all_results: Dict of Dataframes, one results dataframe per strategy
Stores backtest results
:param recordfilename: Path object, which can either be a filename or a directory.
Filenames will be appended with a timestamp right before the suffix
while for diectories, <directory>/backtest-result-<datetime>.json will be used as filename
:param stats: Dataframe containing the backtesting statistics
"""
for strategy, results in all_results.items():
records = backtest_result_to_list(results)
if recordfilename.is_dir():
filename = (recordfilename /
f'backtest-result-{datetime.now().strftime("%Y-%m-%d_%H-%M-%S")}.json')
else:
filename = Path.joinpath(
recordfilename.parent,
f'{recordfilename.stem}-{datetime.now().strftime("%Y-%m-%d_%H-%M-%S")}'
).with_suffix(recordfilename.suffix)
file_dump_json(filename, stats)

if records:
filename = recordfilename
if len(all_results) > 1:
# Inject strategy to filename
filename = Path.joinpath(
recordfilename.parent,
f'{recordfilename.stem}-{strategy}').with_suffix(recordfilename.suffix)
logger.info(f'Dumping backtest results to {filename}')
file_dump_json(filename, records)


def backtest_result_to_list(results: DataFrame) -> List[List]:
"""
Converts a list of Backtest-results to list
:param results: Dataframe containing results for one strategy
:return: List of Lists containing the trades
"""
return [[t.pair, t.profit_percent, t.open_time.timestamp(),
t.close_time.timestamp(), t.open_index - 1, t.trade_duration,
t.open_rate, t.close_rate, t.open_at_end, t.sell_reason.value]
for index, t in results.iterrows()]
latest_filename = Path.joinpath(filename.parent, LAST_BT_RESULT_FN)
file_dump_json(latest_filename, {'latest_backtest': str(filename.name)})


def _get_line_floatfmt() -> List[str]:
@@ -66,11 +60,12 @@ def _generate_result_line(result: DataFrame, max_open_trades: int, first_column:
return {
'key': first_column,
'trades': len(result),
'profit_mean': result['profit_percent'].mean(),
'profit_mean_pct': result['profit_percent'].mean() * 100.0,
'profit_mean': result['profit_percent'].mean() if len(result) > 0 else 0.0,
'profit_mean_pct': result['profit_percent'].mean() * 100.0 if len(result) > 0 else 0.0,
'profit_sum': result['profit_percent'].sum(),
'profit_sum_pct': result['profit_percent'].sum() * 100.0,
'profit_total_abs': result['profit_abs'].sum(),
'profit_total': result['profit_percent'].sum() / max_open_trades,
'profit_total_pct': result['profit_percent'].sum() * 100.0 / max_open_trades,
'duration_avg': str(timedelta(
minutes=round(result['trade_duration'].mean()))
@@ -141,7 +136,7 @@ def generate_sell_reason_stats(max_open_trades: int, results: DataFrame) -> List
'profit_sum': profit_sum,
'profit_sum_pct': round(profit_sum * 100, 2),
'profit_total_abs': result['profit_abs'].sum(),
'profit_pct_total': profit_percent_tot,
'profit_total_pct': profit_percent_tot,
}
)
return tabular_data
@@ -189,18 +184,48 @@ def generate_edge_table(results: dict) -> str:
floatfmt=floatfmt, tablefmt="orgtbl", stralign="right") # type: ignore


def generate_daily_stats(results: DataFrame) -> Dict[str, Any]:
daily_profit = results.resample('1d', on='close_date')['profit_percent'].sum()
worst = min(daily_profit)
best = max(daily_profit)
winning_days = sum(daily_profit > 0)
draw_days = sum(daily_profit == 0)
losing_days = sum(daily_profit < 0)

winning_trades = results.loc[results['profit_percent'] > 0]
losing_trades = results.loc[results['profit_percent'] < 0]

return {
'backtest_best_day': best,
'backtest_worst_day': worst,
'winning_days': winning_days,
'draw_days': draw_days,
'losing_days': losing_days,
'winner_holding_avg': (timedelta(minutes=round(winning_trades['trade_duration'].mean()))
if not winning_trades.empty else timedelta()),
'loser_holding_avg': (timedelta(minutes=round(losing_trades['trade_duration'].mean()))
if not losing_trades.empty else timedelta()),
}


def generate_backtest_stats(config: Dict, btdata: Dict[str, DataFrame],
all_results: Dict[str, DataFrame]) -> Dict[str, Any]:
all_results: Dict[str, DataFrame],
min_date: Arrow, max_date: Arrow
) -> Dict[str, Any]:
"""
:param config: Configuration object used for backtest
:param btdata: Backtest data
:param all_results: backtest result - dictionary with { Strategy: results}.
:param min_date: Backtest start date
:param max_date: Backtest end date
:return:
Dictionary containing results per strategy and a stratgy summary.
"""
stake_currency = config['stake_currency']
max_open_trades = config['max_open_trades']
result: Dict[str, Any] = {'strategy': {}}
market_change = calculate_market_change(btdata, 'close')

for strategy, results in all_results.items():

pair_results = generate_pair_metrics(btdata, stake_currency=stake_currency,
@@ -212,14 +237,57 @@ def generate_backtest_stats(config: Dict, btdata: Dict[str, DataFrame],
max_open_trades=max_open_trades,
results=results.loc[results['open_at_end']],
skip_nan=True)
daily_stats = generate_daily_stats(results)

results['open_timestamp'] = results['open_date'].astype(int64) // 1e6
results['close_timestamp'] = results['close_date'].astype(int64) // 1e6

backtest_days = (max_date - min_date).days
strat_stats = {
'trades': backtest_result_to_list(results),
'trades': results.to_dict(orient='records'),
'results_per_pair': pair_results,
'sell_reason_summary': sell_reason_stats,
'left_open_trades': left_open_results,
}
'total_trades': len(results),
'profit_mean': results['profit_percent'].mean(),
'profit_total': results['profit_percent'].sum(),
'profit_total_abs': results['profit_abs'].sum(),
'backtest_start': min_date.datetime,
'backtest_start_ts': min_date.timestamp * 1000,
'backtest_end': max_date.datetime,
'backtest_end_ts': max_date.timestamp * 1000,
'backtest_days': backtest_days,

'trades_per_day': round(len(results) / backtest_days, 2) if backtest_days > 0 else None,
'market_change': market_change,
'pairlist': list(btdata.keys()),
'stake_amount': config['stake_amount'],
'stake_currency': config['stake_currency'],
'max_open_trades': config['max_open_trades'],
'timeframe': config['timeframe'],
**daily_stats,
}
result['strategy'][strategy] = strat_stats

try:
max_drawdown, drawdown_start, drawdown_end = calculate_max_drawdown(
results, value_col='profit_percent')
strat_stats.update({
'max_drawdown': max_drawdown,
'drawdown_start': drawdown_start,
'drawdown_start_ts': drawdown_start.timestamp() * 1000,
'drawdown_end': drawdown_end,
'drawdown_end_ts': drawdown_end.timestamp() * 1000,
})
except ValueError:
strat_stats.update({
'max_drawdown': 0.0,
'drawdown_start': datetime(1970, 1, 1, tzinfo=timezone.utc),
'drawdown_start_ts': 0,
'drawdown_end': datetime(1970, 1, 1, tzinfo=timezone.utc),
'drawdown_end_ts': 0,
})

strategy_results = generate_strategy_metrics(stake_currency=stake_currency,
max_open_trades=max_open_trades,
all_results=all_results)
@@ -273,7 +341,7 @@ def text_table_sell_reason(sell_reason_stats: List[Dict[str, Any]], stake_curren

output = [[
t['sell_reason'], t['trades'], t['wins'], t['draws'], t['losses'],
t['profit_mean_pct'], t['profit_sum_pct'], t['profit_total_abs'], t['profit_pct_total'],
t['profit_mean_pct'], t['profit_sum_pct'], t['profit_total_abs'], t['profit_total_pct'],
] for t in sell_reason_stats]
return tabulate(output, headers=headers, tablefmt="orgtbl", stralign="right")
@@ -298,6 +366,35 @@ def text_table_strategy(strategy_results, stake_currency: str) -> str:
floatfmt=floatfmt, tablefmt="orgtbl", stralign="right")


def text_table_add_metrics(strat_results: Dict) -> str:
if len(strat_results['trades']) > 0:
min_trade = min(strat_results['trades'], key=lambda x: x['open_date'])
metrics = [
('Backtesting from', strat_results['backtest_start'].strftime(DATETIME_PRINT_FORMAT)),
('Backtesting to', strat_results['backtest_end'].strftime(DATETIME_PRINT_FORMAT)),
('Total trades', strat_results['total_trades']),
('First trade', min_trade['open_date'].strftime(DATETIME_PRINT_FORMAT)),
('First trade Pair', min_trade['pair']),
('Total Profit %', f"{round(strat_results['profit_total'] * 100, 2)}%"),
('Trades per day', strat_results['trades_per_day']),
('Best day', f"{round(strat_results['backtest_best_day'] * 100, 2)}%"),
('Worst day', f"{round(strat_results['backtest_worst_day'] * 100, 2)}%"),
('Days win/draw/lose', f"{strat_results['winning_days']} / "
f"{strat_results['draw_days']} / {strat_results['losing_days']}"),
('Avg. Duration Winners', f"{strat_results['winner_holding_avg']}"),
('Avg. Duration Loser', f"{strat_results['loser_holding_avg']}"),
('', ''), # Empty line to improve readability
('Max Drawdown', f"{round(strat_results['max_drawdown'] * 100, 2)}%"),
('Drawdown Start', strat_results['drawdown_start'].strftime(DATETIME_PRINT_FORMAT)),
('Drawdown End', strat_results['drawdown_end'].strftime(DATETIME_PRINT_FORMAT)),
('Market change', f"{round(strat_results['market_change'] * 100, 2)}%"),
]

return tabulate(metrics, headers=["Metric", "Value"], tablefmt="orgtbl")
else:
return ''


def show_backtest_results(config: Dict, backtest_stats: Dict):
stake_currency = config['stake_currency']

@@ -312,15 +409,21 @@ def show_backtest_results(config: Dict, backtest_stats: Dict):

table = text_table_sell_reason(sell_reason_stats=results['sell_reason_summary'],
stake_currency=stake_currency)
if isinstance(table, str):
if isinstance(table, str) and len(table) > 0:
print(' SELL REASON STATS '.center(len(table.splitlines()[0]), '='))
print(table)

table = text_table_bt_results(results['left_open_trades'], stake_currency=stake_currency)
if isinstance(table, str):
if isinstance(table, str) and len(table) > 0:
print(' LEFT OPEN TRADES REPORT '.center(len(table.splitlines()[0]), '='))
print(table)
if isinstance(table, str):

table = text_table_add_metrics(results)
if isinstance(table, str) and len(table) > 0:
print(' SUMMARY METRICS '.center(len(table.splitlines()[0]), '='))
print(table)

if isinstance(table, str) and len(table) > 0:
print('=' * len(table.splitlines()[0]))
print()
@@ -8,7 +8,8 @@ from freqtrade.configuration import TimeRange
from freqtrade.data.btanalysis import (calculate_max_drawdown,
combine_dataframes_with_mean,
create_cum_profit,
extract_trades_of_period, load_trades)
extract_trades_of_period,
load_trades)
from freqtrade.data.converter import trim_dataframe
from freqtrade.data.dataprovider import DataProvider
from freqtrade.data.history import load_data
@@ -53,19 +54,22 @@ def init_plotscript(config):
)

no_trades = False
filename = config.get('exportfilename')
if config.get('no_trades', False):
no_trades = True
elif not config['exportfilename'].is_file() and config['trade_source'] == 'file':
logger.warning("Backtest file is missing skipping trades.")
no_trades = True
elif config['trade_source'] == 'file':
if not filename.is_dir() and not filename.is_file():
logger.warning("Backtest file is missing skipping trades.")
no_trades = True

trades = load_trades(
config['trade_source'],
db_url=config.get('db_url'),
exportfilename=config.get('exportfilename'),
no_trades=no_trades
exportfilename=filename,
no_trades=no_trades,
strategy=config.get("strategy"),
)
trades = trim_dataframe(trades, timerange, 'open_time')
trades = trim_dataframe(trades, timerange, 'open_date')

return {"ohlcv": data,
"trades": trades,
@@ -165,10 +169,11 @@ def plot_trades(fig, trades: pd.DataFrame) -> make_subplots:
if trades is not None and len(trades) > 0:
# Create description for sell summarizing the trade
trades['desc'] = trades.apply(lambda row: f"{round(row['profit_percent'] * 100, 1)}%, "
f"{row['sell_reason']}, {row['duration']} min",
f"{row['sell_reason']}, "
f"{row['trade_duration']} min",
axis=1)
trade_buys = go.Scatter(
x=trades["open_time"],
x=trades["open_date"],
y=trades["open_rate"],
mode='markers',
name='Trade buy',
@@ -183,7 +188,7 @@ def plot_trades(fig, trades: pd.DataFrame) -> make_subplots:
)

trade_sells = go.Scatter(
x=trades.loc[trades['profit_percent'] > 0, "close_time"],
x=trades.loc[trades['profit_percent'] > 0, "close_date"],
y=trades.loc[trades['profit_percent'] > 0, "close_rate"],
text=trades.loc[trades['profit_percent'] > 0, "desc"],
mode='markers',
@@ -196,7 +201,7 @@ def plot_trades(fig, trades: pd.DataFrame) -> make_subplots:
)
)
trade_sells_loss = go.Scatter(
x=trades.loc[trades['profit_percent'] <= 0, "close_time"],
x=trades.loc[trades['profit_percent'] <= 0, "close_date"],
y=trades.loc[trades['profit_percent'] <= 0, "close_rate"],
text=trades.loc[trades['profit_percent'] <= 0, "desc"],
mode='markers',
@@ -510,7 +515,7 @@ def plot_profit(config: Dict[str, Any]) -> None:
# Remove open pairs - we don't know the profit yet so can't calculate profit for these.
# Also, If only one open pair is left, then the profit-generation would fail.
trades = trades[(trades['pair'].isin(plot_elements["pairs"]))
& (~trades['close_time'].isnull())
& (~trades['close_date'].isnull())
]
if len(trades) == 0:
raise OperationalException("No trades found, cannot generate Profit-plot without "
@@ -16,6 +16,7 @@ from werkzeug.security import safe_str_cmp
from werkzeug.serving import make_server

from freqtrade.__init__ import __version__
from freqtrade.constants import DATETIME_PRINT_FORMAT
from freqtrade.rpc.rpc import RPC, RPCException
from freqtrade.rpc.fiat_convert import CryptoToFiatConverter

@@ -32,7 +33,7 @@ class ArrowJSONEncoder(JSONEncoder):
elif isinstance(obj, date):
return obj.strftime("%Y-%m-%d")
elif isinstance(obj, datetime):
return obj.strftime("%Y-%m-%d %H:%M:%S")
return obj.strftime(DATETIME_PRINT_FORMAT)
iterable = iter(obj)
except TypeError:
pass
@@ -44,6 +44,10 @@ class SellType(Enum):
EMERGENCY_SELL = "emergency_sell"
NONE = ""

def __str__(self):
# explicitly convert to String to help with exporting data.
return self.value


class SellCheckTuple(NamedTuple):
"""
@@ -34,7 +34,7 @@
"# config = Configuration.from_files([\"config.json\"])\n",
"\n",
"# Define some constants\n",
"config[\"ticker_interval\"] = \"5m\"\n",
"config[\"timeframe\"] = \"5m\"\n",
"# Name of the strategy class\n",
"config[\"strategy\"] = \"SampleStrategy\"\n",
"# Location of the data\n",
@@ -53,7 +53,7 @@
"from freqtrade.data.history import load_pair_history\n",
"\n",
"candles = load_pair_history(datadir=data_location,\n",
" timeframe=config[\"ticker_interval\"],\n",
" timeframe=config[\"timeframe\"],\n",
" pair=pair)\n",
"\n",
"# Confirm success\n",
@@ -136,10 +136,51 @@
"metadata": {},
"outputs": [],
"source": [
"from freqtrade.data.btanalysis import load_backtest_data\n",
"from freqtrade.data.btanalysis import load_backtest_data, load_backtest_stats\n",
"\n",
"# Load backtest results\n",
"trades = load_backtest_data(config[\"user_data_dir\"] / \"backtest_results/backtest-result.json\")\n",
"# if backtest_dir points to a directory, it'll automatically load the last backtest file.\n",
"backtest_dir = config[\"user_data_dir\"] / \"backtest_results\"\n",
"# backtest_dir can also point to a specific file \n",
"# backtest_dir = config[\"user_data_dir\"] / \"backtest_results/backtest-result-2020-07-01_20-04-22.json\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can get the full backtest statistics by using the following command.\n",
"# This contains all information used to generate the backtest result.\n",
"stats = load_backtest_stats(backtest_dir)\n",
"\n",
"strategy = 'SampleStrategy'\n",
"# All statistics are available per strategy, so if `--strategy-list` was used during backtest, this will be reflected here as well.\n",
"# Example usages:\n",
"print(stats['strategy'][strategy]['results_per_pair'])\n",
"# Get pairlist used for this backtest\n",
"print(stats['strategy'][strategy]['pairlist'])\n",
"# Get market change (average change of all pairs from start to end of the backtest period)\n",
"print(stats['strategy'][strategy]['market_change'])\n",
"# Maximum drawdown ()\n",
"print(stats['strategy'][strategy]['max_drawdown'])\n",
"# Maximum drawdown start and end\n",
"print(stats['strategy'][strategy]['drawdown_start'])\n",
"print(stats['strategy'][strategy]['drawdown_end'])\n",
"\n",
"\n",
"# Get strategy comparison (only relevant if multiple strategies were compared)\n",
"print(stats['strategy_comparison'])\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load backtested trades as dataframe\n",
"trades = load_backtest_data(backtest_dir)\n",
"\n",
"# Show value-counts per pair\n",
"trades.groupby(\"pair\")[\"sell_reason\"].value_counts()"
@@ -181,7 +181,8 @@ def create_mock_trades(fee):
fee_close=fee.return_value,
open_rate=0.123,
exchange='bittrex',
open_order_id='dry_run_buy_12345'
open_order_id='dry_run_buy_12345',
strategy='DefaultStrategy',
)
Trade.session.add(trade)

@@ -197,7 +198,8 @@ def create_mock_trades(fee):
close_profit=0.005,
exchange='bittrex',
is_open=False,
open_order_id='dry_run_sell_12345'
open_order_id='dry_run_sell_12345',
strategy='DefaultStrategy',
)
Trade.session.add(trade)

@@ -225,7 +227,8 @@ def create_mock_trades(fee):
fee_close=fee.return_value,
open_rate=0.123,
exchange='bittrex',
open_order_id='prod_buy_12345'
open_order_id='prod_buy_12345',
strategy='DefaultStrategy',
)
Trade.session.add(trade)
@@ -6,24 +6,48 @@ from arrow import Arrow
from pandas import DataFrame, DateOffset, Timestamp, to_datetime

from freqtrade.configuration import TimeRange
from freqtrade.constants import LAST_BT_RESULT_FN
from freqtrade.data.btanalysis import (BT_DATA_COLUMNS,
analyze_trade_parallelism,
calculate_market_change,
calculate_max_drawdown,
combine_dataframes_with_mean,
create_cum_profit,
extract_trades_of_period,
get_latest_backtest_filename,
load_backtest_data, load_trades,
load_trades_from_db)
from freqtrade.data.history import load_data, load_pair_history
from freqtrade.optimize.backtesting import BacktestResult
from tests.conftest import create_mock_trades


def test_load_backtest_data(testdatadir):
def test_get_latest_backtest_filename(testdatadir, mocker):
with pytest.raises(ValueError, match=r"Directory .* does not exist\."):
get_latest_backtest_filename(testdatadir / 'does_not_exist')

with pytest.raises(ValueError,
match=r"Directory .* does not seem to contain .*"):
get_latest_backtest_filename(testdatadir.parent)

res = get_latest_backtest_filename(testdatadir)
assert res == 'backtest-result_new.json'

res = get_latest_backtest_filename(str(testdatadir))
assert res == 'backtest-result_new.json'

mocker.patch("freqtrade.data.btanalysis.json_load", return_value={})

with pytest.raises(ValueError, match=r"Invalid '.last_result.json' format."):
get_latest_backtest_filename(testdatadir)


def test_load_backtest_data_old_format(testdatadir):

filename = testdatadir / "backtest-result_test.json"
bt_data = load_backtest_data(filename)
assert isinstance(bt_data, DataFrame)
assert list(bt_data.columns) == BT_DATA_COLUMNS + ["profit"]
assert list(bt_data.columns) == BT_DATA_COLUMNS + ["profit_abs"]
assert len(bt_data) == 179

# Test loading from string (must yield same result)
@@ -34,6 +58,49 @@ def test_load_backtest_data(testdatadir):
load_backtest_data(str("filename") + "nofile")


def test_load_backtest_data_new_format(testdatadir):

filename = testdatadir / "backtest-result_new.json"
bt_data = load_backtest_data(filename)
assert isinstance(bt_data, DataFrame)
assert set(bt_data.columns) == set(list(BacktestResult._fields) + ["profit_abs"])
assert len(bt_data) == 179

# Test loading from string (must yield same result)
bt_data2 = load_backtest_data(str(filename))
assert bt_data.equals(bt_data2)

# Test loading from folder (must yield same result)
bt_data3 = load_backtest_data(testdatadir)
assert bt_data.equals(bt_data3)

with pytest.raises(ValueError, match=r"File .* does not exist\."):
load_backtest_data(str("filename") + "nofile")

with pytest.raises(ValueError, match=r"Unknown dataformat."):
load_backtest_data(testdatadir / LAST_BT_RESULT_FN)
def test_load_backtest_data_multi(testdatadir):
|
||||
|
||||
filename = testdatadir / "backtest-result_multistrat.json"
|
||||
for strategy in ('DefaultStrategy', 'TestStrategy'):
|
||||
bt_data = load_backtest_data(filename, strategy=strategy)
|
||||
assert isinstance(bt_data, DataFrame)
|
||||
assert set(bt_data.columns) == set(list(BacktestResult._fields) + ["profit_abs"])
|
||||
assert len(bt_data) == 179
|
||||
|
||||
# Test loading from string (must yield same result)
|
||||
bt_data2 = load_backtest_data(str(filename), strategy=strategy)
|
||||
assert bt_data.equals(bt_data2)
|
||||
|
||||
with pytest.raises(ValueError, match=r"Strategy XYZ not available in the backtest result\."):
|
||||
load_backtest_data(filename, strategy='XYZ')
|
||||
|
||||
with pytest.raises(ValueError, match=r"Detected backtest result with more than one strategy.*"):
|
||||
load_backtest_data(filename)
|
||||
|
||||
|
||||
@pytest.mark.usefixtures("init_persistence")
|
||||
def test_load_trades_from_db(default_conf, fee, mocker):
|
||||
|
||||
@ -46,12 +113,16 @@ def test_load_trades_from_db(default_conf, fee, mocker):
|
||||
assert len(trades) == 4
|
||||
assert isinstance(trades, DataFrame)
|
||||
assert "pair" in trades.columns
|
||||
assert "open_time" in trades.columns
|
||||
assert "open_date" in trades.columns
|
||||
assert "profit_percent" in trades.columns
|
||||
|
||||
for col in BT_DATA_COLUMNS:
|
||||
if col not in ['index', 'open_at_end']:
|
||||
assert col in trades.columns
|
||||
trades = load_trades_from_db(db_url=default_conf['db_url'], strategy='DefaultStrategy')
|
||||
assert len(trades) == 3
|
||||
trades = load_trades_from_db(db_url=default_conf['db_url'], strategy='NoneStrategy')
|
||||
assert len(trades) == 0
|
||||
|
||||
|
||||
def test_extract_trades_of_period(testdatadir):
|
||||
@ -66,13 +137,13 @@ def test_extract_trades_of_period(testdatadir):
|
||||
{'pair': [pair, pair, pair, pair],
|
||||
'profit_percent': [0.0, 0.1, -0.2, -0.5],
|
||||
'profit_abs': [0.0, 1, -2, -5],
|
||||
'open_time': to_datetime([Arrow(2017, 11, 13, 15, 40, 0).datetime,
|
||||
'open_date': to_datetime([Arrow(2017, 11, 13, 15, 40, 0).datetime,
|
||||
Arrow(2017, 11, 14, 9, 41, 0).datetime,
|
||||
Arrow(2017, 11, 14, 14, 20, 0).datetime,
|
||||
Arrow(2017, 11, 15, 3, 40, 0).datetime,
|
||||
], utc=True
|
||||
),
|
||||
'close_time': to_datetime([Arrow(2017, 11, 13, 16, 40, 0).datetime,
|
||||
'close_date': to_datetime([Arrow(2017, 11, 13, 16, 40, 0).datetime,
|
||||
Arrow(2017, 11, 14, 10, 41, 0).datetime,
|
||||
Arrow(2017, 11, 14, 15, 25, 0).datetime,
|
||||
Arrow(2017, 11, 15, 3, 55, 0).datetime,
|
||||
@ -81,10 +152,10 @@ def test_extract_trades_of_period(testdatadir):
|
||||
trades1 = extract_trades_of_period(data, trades)
|
||||
# First and last trade are dropped as they are out of range
|
||||
assert len(trades1) == 2
|
||||
assert trades1.iloc[0].open_time == Arrow(2017, 11, 14, 9, 41, 0).datetime
|
||||
assert trades1.iloc[0].close_time == Arrow(2017, 11, 14, 10, 41, 0).datetime
|
||||
assert trades1.iloc[-1].open_time == Arrow(2017, 11, 14, 14, 20, 0).datetime
|
||||
assert trades1.iloc[-1].close_time == Arrow(2017, 11, 14, 15, 25, 0).datetime
|
||||
assert trades1.iloc[0].open_date == Arrow(2017, 11, 14, 9, 41, 0).datetime
|
||||
assert trades1.iloc[0].close_date == Arrow(2017, 11, 14, 10, 41, 0).datetime
|
||||
assert trades1.iloc[-1].open_date == Arrow(2017, 11, 14, 14, 20, 0).datetime
|
||||
assert trades1.iloc[-1].close_date == Arrow(2017, 11, 14, 15, 25, 0).datetime
|
||||
|
||||
|
||||
def test_analyze_trade_parallelism(default_conf, mocker, testdatadir):
|
||||
@ -105,7 +176,8 @@ def test_load_trades(default_conf, mocker):
|
||||
load_trades("DB",
|
||||
db_url=default_conf.get('db_url'),
|
||||
exportfilename=default_conf.get('exportfilename'),
|
||||
no_trades=False
|
||||
no_trades=False,
|
||||
strategy="DefaultStrategy",
|
||||
)
|
||||
|
||||
assert db_mock.call_count == 1
|
||||
@ -135,6 +207,14 @@ def test_load_trades(default_conf, mocker):
|
||||
assert bt_mock.call_count == 0
|
||||
|
||||
|
||||
def test_calculate_market_change(testdatadir):
|
||||
pairs = ["ETH/BTC", "ADA/BTC"]
|
||||
data = load_data(datadir=testdatadir, pairs=pairs, timeframe='5m')
|
||||
result = calculate_market_change(data)
|
||||
assert isinstance(result, float)
|
||||
assert pytest.approx(result) == 0.00955514
|
||||
|
||||
|
||||
def test_combine_dataframes_with_mean(testdatadir):
|
||||
pairs = ["ETH/BTC", "ADA/BTC"]
|
||||
data = load_data(datadir=testdatadir, pairs=pairs, timeframe='5m')
|
||||
@ -165,7 +245,7 @@ def test_create_cum_profit1(testdatadir):
|
||||
filename = testdatadir / "backtest-result_test.json"
|
||||
bt_data = load_backtest_data(filename)
|
||||
# Move close-time to "off" the candle, to make sure the logic still works
|
||||
bt_data.loc[:, 'close_time'] = bt_data.loc[:, 'close_time'] + DateOffset(seconds=20)
|
||||
bt_data.loc[:, 'close_date'] = bt_data.loc[:, 'close_date'] + DateOffset(seconds=20)
|
||||
timerange = TimeRange.parse_timerange("20180110-20180112")
|
||||
|
||||
df = load_pair_history(pair="TRX/BTC", timeframe='5m',
|
||||
@ -204,11 +284,11 @@ def test_calculate_max_drawdown2():
|
||||
-0.033961, 0.010680, 0.010886, -0.029274, 0.011178, 0.010693, 0.010711]
|
||||
|
||||
dates = [Arrow(2020, 1, 1).shift(days=i) for i in range(len(values))]
|
||||
df = DataFrame(zip(values, dates), columns=['profit', 'open_time'])
|
||||
df = DataFrame(zip(values, dates), columns=['profit', 'open_date'])
|
||||
# sort by profit and reset index
|
||||
df = df.sort_values('profit').reset_index(drop=True)
|
||||
df1 = df.copy()
|
||||
drawdown, h, low = calculate_max_drawdown(df, date_col='open_time', value_col='profit')
|
||||
drawdown, h, low = calculate_max_drawdown(df, date_col='open_date', value_col='profit')
|
||||
# Ensure df has not been altered.
|
||||
assert df.equals(df1)
|
||||
|
||||
@ -217,6 +297,6 @@ def test_calculate_max_drawdown2():
|
||||
assert h < low
|
||||
assert drawdown == 0.091755
|
||||
|
||||
df = DataFrame(zip(values[:5], dates[:5]), columns=['profit', 'open_time'])
|
||||
df = DataFrame(zip(values[:5], dates[:5]), columns=['profit', 'open_date'])
|
||||
with pytest.raises(ValueError, match='No losing trade, therefore no drawdown.'):
|
||||
calculate_max_drawdown(df, date_col='open_time', value_col='profit')
|
||||
calculate_max_drawdown(df, date_col='open_date', value_col='profit')
|
||||
|
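Note: the new tests above pin down the `.last_result.json` pointer format (the one-line fixture added further down is `{"latest_backtest":"backtest-result_new.json"}`) and the errors raised by `get_latest_backtest_filename`. As a rough illustration only, a helper with equivalent behaviour could look like the sketch below; the function name and the JSON handling here are assumptions, not freqtrade's own implementation.

```python
# Illustrative sketch only: resolve the latest backtest export the way the
# tests above exercise get_latest_backtest_filename(). Assumes the pointer
# file is named '.last_result.json' and contains {"latest_backtest": "<file>"}.
import json
from pathlib import Path


def latest_backtest_filename(directory: Path) -> str:
    directory = Path(directory)
    if not directory.is_dir():
        raise ValueError(f"Directory '{directory}' does not exist.")
    pointer = directory / '.last_result.json'
    if not pointer.is_file():
        raise ValueError(
            f"Directory '{directory}' does not seem to contain backtest statistics yet.")
    data = json.loads(pointer.read_text())
    if 'latest_backtest' not in data:
        raise ValueError("Invalid '.last_result.json' format.")
    return data['latest_backtest']


# Usage: latest_backtest_filename(Path("user_data/backtest_results"))
# would return e.g. 'backtest-result_new.json' given the fixture shown below.
```
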
@ -36,7 +36,7 @@ def _backup_file(file: Path, copy_file: bool = False) -> None:
    """
    Backup existing file to avoid deleting the user file
    :param file: complete path to the file
    :param touch_file: create an empty file in replacement
    :param copy_file: keep file in place too.
    :return: None
    """
    file_swp = str(file) + '.swp'

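For reference, the `_backup_file` test helper whose docstring is corrected above moves the file aside to a `.swp` path; a minimal sketch consistent with that docstring follows. Only the `.swp` suffix is taken from the hunk, the rest of the body is an assumption.

```python
# Sketch of a backup helper matching the docstring above. Only the '.swp'
# suffix comes from the diff; the rename/copy details are assumptions.
import shutil
from pathlib import Path


def _backup_file(file: Path, copy_file: bool = False) -> None:
    """
    Backup existing file to avoid deleting the user file
    :param file: complete path to the file
    :param copy_file: keep file in place too.
    :return: None
    """
    file_swp = str(file) + '.swp'
    if file.is_file():
        file.rename(file_swp)            # move the original out of the way
        if copy_file:
            shutil.copy(file_swp, file)  # keep a copy at the original path
```
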
@ -163,8 +163,8 @@ def test_edge_results(edge_conf, mocker, caplog, data) -> None:
    for c, trade in enumerate(data.trades):
        res = results.iloc[c]
        assert res.exit_type == trade.sell_reason
        assert res.open_time == _get_frame_time_from_offset(trade.open_tick).replace(tzinfo=None)
        assert res.close_time == _get_frame_time_from_offset(trade.close_tick).replace(tzinfo=None)
        assert res.open_date == _get_frame_time_from_offset(trade.open_tick).replace(tzinfo=None)
        assert res.close_date == _get_frame_time_from_offset(trade.close_tick).replace(tzinfo=None)


def test_adjust(mocker, edge_conf):
@ -354,10 +354,8 @@ def test_process_expectancy(mocker, edge_conf, fee, risk_reward_ratio, expectanc
        'stoploss': -0.9,
        'profit_percent': '',
        'profit_abs': '',
        'open_time': np.datetime64('2018-10-03T00:05:00.000000000'),
        'close_time': np.datetime64('2018-10-03T00:10:00.000000000'),
        'open_index': 1,
        'close_index': 1,
        'open_date': np.datetime64('2018-10-03T00:05:00.000000000'),
        'close_date': np.datetime64('2018-10-03T00:10:00.000000000'),
        'trade_duration': '',
        'open_rate': 17,
        'close_rate': 17,
@ -367,10 +365,8 @@ def test_process_expectancy(mocker, edge_conf, fee, risk_reward_ratio, expectanc
        'stoploss': -0.9,
        'profit_percent': '',
        'profit_abs': '',
        'open_time': np.datetime64('2018-10-03T00:20:00.000000000'),
        'close_time': np.datetime64('2018-10-03T00:25:00.000000000'),
        'open_index': 4,
        'close_index': 4,
        'open_date': np.datetime64('2018-10-03T00:20:00.000000000'),
        'close_date': np.datetime64('2018-10-03T00:25:00.000000000'),
        'trade_duration': '',
        'open_rate': 20,
        'close_rate': 20,
@ -380,10 +376,8 @@ def test_process_expectancy(mocker, edge_conf, fee, risk_reward_ratio, expectanc
        'stoploss': -0.9,
        'profit_percent': '',
        'profit_abs': '',
        'open_time': np.datetime64('2018-10-03T00:30:00.000000000'),
        'close_time': np.datetime64('2018-10-03T00:40:00.000000000'),
        'open_index': 6,
        'close_index': 7,
        'open_date': np.datetime64('2018-10-03T00:30:00.000000000'),
        'close_date': np.datetime64('2018-10-03T00:40:00.000000000'),
        'trade_duration': '',
        'open_rate': 26,
        'close_rate': 34,
@ -424,8 +418,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
        'stoploss': -0.9,
        'profit_percent': '',
        'profit_abs': '',
        'open_time': np.datetime64('2018-10-03T00:05:00.000000000'),
        'close_time': np.datetime64('2018-10-03T00:10:00.000000000'),
        'open_date': np.datetime64('2018-10-03T00:05:00.000000000'),
        'close_date': np.datetime64('2018-10-03T00:10:00.000000000'),
        'open_index': 1,
        'close_index': 1,
        'trade_duration': '',
@ -437,8 +431,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
        'stoploss': -0.9,
        'profit_percent': '',
        'profit_abs': '',
        'open_time': np.datetime64('2018-10-03T00:20:00.000000000'),
        'close_time': np.datetime64('2018-10-03T00:25:00.000000000'),
        'open_date': np.datetime64('2018-10-03T00:20:00.000000000'),
        'close_date': np.datetime64('2018-10-03T00:25:00.000000000'),
        'open_index': 4,
        'close_index': 4,
        'trade_duration': '',
@ -449,8 +443,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
        'stoploss': -0.9,
        'profit_percent': '',
        'profit_abs': '',
        'open_time': np.datetime64('2018-10-03T00:20:00.000000000'),
        'close_time': np.datetime64('2018-10-03T00:25:00.000000000'),
        'open_date': np.datetime64('2018-10-03T00:20:00.000000000'),
        'close_date': np.datetime64('2018-10-03T00:25:00.000000000'),
        'open_index': 4,
        'close_index': 4,
        'trade_duration': '',
@ -461,8 +455,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
        'stoploss': -0.9,
        'profit_percent': '',
        'profit_abs': '',
        'open_time': np.datetime64('2018-10-03T00:20:00.000000000'),
        'close_time': np.datetime64('2018-10-03T00:25:00.000000000'),
        'open_date': np.datetime64('2018-10-03T00:20:00.000000000'),
        'close_date': np.datetime64('2018-10-03T00:25:00.000000000'),
        'open_index': 4,
        'close_index': 4,
        'trade_duration': '',
@ -473,8 +467,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
        'stoploss': -0.9,
        'profit_percent': '',
        'profit_abs': '',
        'open_time': np.datetime64('2018-10-03T00:20:00.000000000'),
        'close_time': np.datetime64('2018-10-03T00:25:00.000000000'),
        'open_date': np.datetime64('2018-10-03T00:20:00.000000000'),
        'close_date': np.datetime64('2018-10-03T00:25:00.000000000'),
        'open_index': 4,
        'close_index': 4,
        'trade_duration': '',
@ -486,8 +480,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
        'stoploss': -0.9,
        'profit_percent': '',
        'profit_abs': '',
        'open_time': np.datetime64('2018-10-03T00:30:00.000000000'),
        'close_time': np.datetime64('2018-10-03T00:40:00.000000000'),
        'open_date': np.datetime64('2018-10-03T00:30:00.000000000'),
        'close_date': np.datetime64('2018-10-03T00:40:00.000000000'),
        'open_index': 6,
        'close_index': 7,
        'trade_duration': '',

@ -395,5 +395,5 @@ def test_backtest_results(default_conf, fee, mocker, caplog, data) -> None:
    for c, trade in enumerate(data.trades):
        res = results.iloc[c]
        assert res.sell_reason == trade.sell_reason
        assert res.open_time == _get_frame_time_from_offset(trade.open_tick)
        assert res.close_time == _get_frame_time_from_offset(trade.close_tick)
        assert res.open_date == _get_frame_time_from_offset(trade.open_tick)
        assert res.close_date == _get_frame_time_from_offset(trade.close_tick)

@ -354,8 +354,8 @@ def test_backtesting_start(default_conf, mocker, testdatadir, caplog) -> None:
    exists = [
        'Using stake_currency: BTC ...',
        'Using stake_amount: 0.001 ...',
        'Backtesting with data from 2017-11-14T21:17:00+00:00 '
        'up to 2017-11-14T22:59:00+00:00 (0 days)..'
        'Backtesting with data from 2017-11-14 21:17:00 '
        'up to 2017-11-14 22:59:00 (0 days)..'
    ]
    for line in exists:
        assert log_has(line, caplog)
@ -464,28 +464,29 @@ def test_backtest(default_conf, fee, mocker, testdatadir) -> None:
        {'pair': [pair, pair],
         'profit_percent': [0.0, 0.0],
         'profit_abs': [0.0, 0.0],
         'open_time': pd.to_datetime([Arrow(2018, 1, 29, 18, 40, 0).datetime,
         'open_date': pd.to_datetime([Arrow(2018, 1, 29, 18, 40, 0).datetime,
                                      Arrow(2018, 1, 30, 3, 30, 0).datetime], utc=True
                                     ),
         'close_time': pd.to_datetime([Arrow(2018, 1, 29, 22, 35, 0).datetime,
         'open_rate': [0.104445, 0.10302485],
         'open_fee': [0.0025, 0.0025],
         'close_date': pd.to_datetime([Arrow(2018, 1, 29, 22, 35, 0).datetime,
                                       Arrow(2018, 1, 30, 4, 10, 0).datetime], utc=True),
         'open_index': [78, 184],
         'close_index': [125, 192],
         'close_rate': [0.104969, 0.103541],
         'close_fee': [0.0025, 0.0025],
         'amount': [0.00957442, 0.0097064],
         'trade_duration': [235, 40],
         'open_at_end': [False, False],
         'open_rate': [0.104445, 0.10302485],
         'close_rate': [0.104969, 0.103541],
         'sell_reason': [SellType.ROI, SellType.ROI]
         })
    pd.testing.assert_frame_equal(results, expected)
    data_pair = processed[pair]
    for _, t in results.iterrows():
        ln = data_pair.loc[data_pair["date"] == t["open_time"]]
        ln = data_pair.loc[data_pair["date"] == t["open_date"]]
        # Check open trade rate alignes to open rate
        assert ln is not None
        assert round(ln.iloc[0]["open"], 6) == round(t["open_rate"], 6)
        # check close trade rate alignes to close rate or is between high and low
        ln = data_pair.loc[data_pair["date"] == t["close_time"]]
        ln = data_pair.loc[data_pair["date"] == t["close_date"]]
        assert (round(ln.iloc[0]["open"], 6) == round(t["close_rate"], 6) or
                round(ln.iloc[0]["low"], 6) < round(
                    t["close_rate"], 6) < round(ln.iloc[0]["high"], 6))
@ -677,10 +678,10 @@ def test_backtest_start_timerange(default_conf, mocker, caplog, testdatadir):
        f'Using data directory: {testdatadir} ...',
        'Using stake_currency: BTC ...',
        'Using stake_amount: 0.001 ...',
        'Loading data from 2017-11-14T20:57:00+00:00 '
        'up to 2017-11-14T22:58:00+00:00 (0 days)..',
        'Backtesting with data from 2017-11-14T21:17:00+00:00 '
        'up to 2017-11-14T22:58:00+00:00 (0 days)..',
        'Loading data from 2017-11-14 20:57:00 '
        'up to 2017-11-14 22:58:00 (0 days)..',
        'Backtesting with data from 2017-11-14 21:17:00 '
        'up to 2017-11-14 22:58:00 (0 days)..',
        'Parameter --enable-position-stacking detected ...'
    ]

@ -707,6 +708,7 @@ def test_backtest_start_multi_strat(default_conf, mocker, caplog, testdatadir):
        generate_pair_metrics=MagicMock(),
        generate_sell_reason_stats=sell_reason_mock,
        generate_strategy_metrics=strat_summary,
        generate_daily_stats=MagicMock(),
    )
    patched_configuration_load_config_file(mocker, default_conf)

@ -740,10 +742,10 @@ def test_backtest_start_multi_strat(default_conf, mocker, caplog, testdatadir):
        f'Using data directory: {testdatadir} ...',
        'Using stake_currency: BTC ...',
        'Using stake_amount: 0.001 ...',
        'Loading data from 2017-11-14T20:57:00+00:00 '
        'up to 2017-11-14T22:58:00+00:00 (0 days)..',
        'Backtesting with data from 2017-11-14T21:17:00+00:00 '
        'up to 2017-11-14T22:58:00+00:00 (0 days)..',
        'Loading data from 2017-11-14 20:57:00 '
        'up to 2017-11-14 22:58:00 (0 days)..',
        'Backtesting with data from 2017-11-14 21:17:00 '
        'up to 2017-11-14 22:58:00 (0 days)..',
        'Parameter --enable-position-stacking detected ...',
        'Running backtesting for Strategy DefaultStrategy',
        'Running backtesting for Strategy TestStrategyLegacy',
@ -761,13 +763,11 @@ def test_backtest_start_multi_strat_nomock(default_conf, mocker, caplog, testdat
        pd.DataFrame({'pair': ['XRP/BTC', 'LTC/BTC'],
                      'profit_percent': [0.0, 0.0],
                      'profit_abs': [0.0, 0.0],
                      'open_time': pd.to_datetime(['2018-01-29 18:40:00',
                      'open_date': pd.to_datetime(['2018-01-29 18:40:00',
                                                   '2018-01-30 03:30:00', ], utc=True
                                                  ),
                      'close_time': pd.to_datetime(['2018-01-29 20:45:00',
                      'close_date': pd.to_datetime(['2018-01-29 20:45:00',
                                                    '2018-01-30 05:35:00', ], utc=True),
                      'open_index': [78, 184],
                      'close_index': [125, 192],
                      'trade_duration': [235, 40],
                      'open_at_end': [False, False],
                      'open_rate': [0.104445, 0.10302485],
@ -777,15 +777,13 @@ def test_backtest_start_multi_strat_nomock(default_conf, mocker, caplog, testdat
        pd.DataFrame({'pair': ['XRP/BTC', 'LTC/BTC', 'ETH/BTC'],
                      'profit_percent': [0.03, 0.01, 0.1],
                      'profit_abs': [0.01, 0.02, 0.2],
                      'open_time': pd.to_datetime(['2018-01-29 18:40:00',
                      'open_date': pd.to_datetime(['2018-01-29 18:40:00',
                                                   '2018-01-30 03:30:00',
                                                   '2018-01-30 05:30:00'], utc=True
                                                  ),
                      'close_time': pd.to_datetime(['2018-01-29 20:45:00',
                      'close_date': pd.to_datetime(['2018-01-29 20:45:00',
                                                    '2018-01-30 05:35:00',
                                                    '2018-01-30 08:30:00'], utc=True),
                      'open_index': [78, 184, 185],
                      'close_index': [125, 224, 205],
                      'trade_duration': [47, 40, 20],
                      'open_at_end': [False, False, False],
                      'open_rate': [0.104445, 0.10302485, 0.122541],
@ -823,10 +821,10 @@ def test_backtest_start_multi_strat_nomock(default_conf, mocker, caplog, testdat
        f'Using data directory: {testdatadir} ...',
        'Using stake_currency: BTC ...',
        'Using stake_amount: 0.001 ...',
        'Loading data from 2017-11-14T20:57:00+00:00 '
        'up to 2017-11-14T22:58:00+00:00 (0 days)..',
        'Backtesting with data from 2017-11-14T21:17:00+00:00 '
        'up to 2017-11-14T22:58:00+00:00 (0 days)..',
        'Loading data from 2017-11-14 20:57:00 '
        'up to 2017-11-14 22:58:00 (0 days)..',
        'Backtesting with data from 2017-11-14 21:17:00 '
        'up to 2017-11-14 22:58:00 (0 days)..',
        'Parameter --enable-position-stacking detected ...',
        'Running backtesting for Strategy DefaultStrategy',
        'Running backtesting for Strategy TestStrategyLegacy',

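Most of the hunks above are mechanical renames of the trade columns `open_time`/`close_time` to `open_date`/`close_date`. For user analysis code that still carries the old names, a generic pandas rename bridges the two schemas; the column names below are taken from the diff, everything else is a plain pandas sketch and not part of freqtrade.

```python
# Generic pandas sketch: adapt a DataFrame using the pre-rename column names
# (open_time/close_time) to the renamed schema (open_date/close_date).
import pandas as pd


def rename_trade_columns(trades: pd.DataFrame) -> pd.DataFrame:
    """Return a copy with the old column names mapped to the new ones."""
    mapping = {'open_time': 'open_date', 'close_time': 'close_date'}
    present = {old: new for old, new in mapping.items() if old in trades.columns}
    return trades.rename(columns=present)


# Example:
# old = pd.DataFrame({'open_time': pd.to_datetime(['2018-01-29 18:40:00'], utc=True)})
# new = rename_trade_columns(old)   # column is now 'open_date'
```
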
@ -59,7 +59,7 @@ def hyperopt_results():
        'profit_abs': [-0.2, 0.4, 0.6],
        'trade_duration': [10, 30, 10],
        'sell_reason': [SellType.STOP_LOSS, SellType.ROI, SellType.ROI],
        'close_time':
        'close_date':
        [
            datetime(2019, 1, 1, 9, 26, 3, 478039),
            datetime(2019, 2, 1, 9, 26, 3, 478039),

@ -1,16 +1,29 @@
import re
from datetime import timedelta
from pathlib import Path

import pandas as pd
import pytest
from arrow import Arrow

from freqtrade.configuration import TimeRange
from freqtrade.constants import LAST_BT_RESULT_FN
from freqtrade.data import history
from freqtrade.data.btanalysis import (get_latest_backtest_filename,
                                       load_backtest_data)
from freqtrade.edge import PairInfo
from freqtrade.optimize.optimize_reports import (
    generate_pair_metrics, generate_edge_table, generate_sell_reason_stats,
    text_table_bt_results, text_table_sell_reason, generate_strategy_metrics,
    text_table_strategy, store_backtest_result)
from freqtrade.optimize.optimize_reports import (generate_backtest_stats,
                                                 generate_daily_stats,
                                                 generate_edge_table,
                                                 generate_pair_metrics,
                                                 generate_sell_reason_stats,
                                                 generate_strategy_metrics,
                                                 store_backtest_stats,
                                                 text_table_bt_results,
                                                 text_table_sell_reason,
                                                 text_table_strategy)
from freqtrade.strategy.interface import SellType
from tests.conftest import patch_exchange
from tests.data.test_history import _backup_file, _clean_test_file


def test_text_table_bt_results(default_conf, mocker):
@ -43,6 +56,115 @@ def test_text_table_bt_results(default_conf, mocker):
    assert text_table_bt_results(pair_results, stake_currency='BTC') == result_str


def test_generate_backtest_stats(default_conf, testdatadir):
    results = {'DefStrat': pd.DataFrame({"pair": ["UNITTEST/BTC", "UNITTEST/BTC",
                                                  "UNITTEST/BTC", "UNITTEST/BTC"],
                                         "profit_percent": [0.003312, 0.010801, 0.013803, 0.002780],
                                         "profit_abs": [0.000003, 0.000011, 0.000014, 0.000003],
                                         "open_date": [Arrow(2017, 11, 14, 19, 32, 00).datetime,
                                                       Arrow(2017, 11, 14, 21, 36, 00).datetime,
                                                       Arrow(2017, 11, 14, 22, 12, 00).datetime,
                                                       Arrow(2017, 11, 14, 22, 44, 00).datetime],
                                         "close_date": [Arrow(2017, 11, 14, 21, 35, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 10, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 43, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 58, 00).datetime],
                                         "open_rate": [0.002543, 0.003003, 0.003089, 0.003214],
                                         "close_rate": [0.002546, 0.003014, 0.003103, 0.003217],
                                         "trade_duration": [123, 34, 31, 14],
                                         "open_at_end": [False, False, False, True],
                                         "sell_reason": [SellType.ROI, SellType.STOP_LOSS,
                                                         SellType.ROI, SellType.FORCE_SELL]
                                         })}
    timerange = TimeRange.parse_timerange('1510688220-1510700340')
    min_date = Arrow.fromtimestamp(1510688220)
    max_date = Arrow.fromtimestamp(1510700340)
    btdata = history.load_data(testdatadir, '1m', ['UNITTEST/BTC'], timerange=timerange,
                               fill_up_missing=True)

    stats = generate_backtest_stats(default_conf, btdata, results, min_date, max_date)
    assert isinstance(stats, dict)
    assert 'strategy' in stats
    assert 'DefStrat' in stats['strategy']
    assert 'strategy_comparison' in stats
    strat_stats = stats['strategy']['DefStrat']
    assert strat_stats['backtest_start'] == min_date.datetime
    assert strat_stats['backtest_end'] == max_date.datetime
    assert strat_stats['total_trades'] == len(results['DefStrat'])
    # Above sample had no loosing trade
    assert strat_stats['max_drawdown'] == 0.0

    results = {'DefStrat': pd.DataFrame(
        {"pair": ["UNITTEST/BTC", "UNITTEST/BTC", "UNITTEST/BTC", "UNITTEST/BTC"],
         "profit_percent": [0.003312, 0.010801, -0.013803, 0.002780],
         "profit_abs": [0.000003, 0.000011, -0.000014, 0.000003],
         "open_date": [Arrow(2017, 11, 14, 19, 32, 00).datetime,
                       Arrow(2017, 11, 14, 21, 36, 00).datetime,
                       Arrow(2017, 11, 14, 22, 12, 00).datetime,
                       Arrow(2017, 11, 14, 22, 44, 00).datetime],
         "close_date": [Arrow(2017, 11, 14, 21, 35, 00).datetime,
                        Arrow(2017, 11, 14, 22, 10, 00).datetime,
                        Arrow(2017, 11, 14, 22, 43, 00).datetime,
                        Arrow(2017, 11, 14, 22, 58, 00).datetime],
         "open_rate": [0.002543, 0.003003, 0.003089, 0.003214],
         "close_rate": [0.002546, 0.003014, 0.0032903, 0.003217],
         "trade_duration": [123, 34, 31, 14],
         "open_at_end": [False, False, False, True],
         "sell_reason": [SellType.ROI, SellType.STOP_LOSS,
                         SellType.ROI, SellType.FORCE_SELL]
         })}

    assert strat_stats['max_drawdown'] == 0.0
    assert strat_stats['drawdown_start'] == Arrow.fromtimestamp(0).datetime
    assert strat_stats['drawdown_end'] == Arrow.fromtimestamp(0).datetime
    assert strat_stats['drawdown_end_ts'] == 0
    assert strat_stats['drawdown_start_ts'] == 0
    assert strat_stats['pairlist'] == ['UNITTEST/BTC']

    # Test storing stats
    filename = Path(testdatadir / 'btresult.json')
    filename_last = Path(testdatadir / LAST_BT_RESULT_FN)
    _backup_file(filename_last, copy_file=True)
    assert not filename.is_file()

    store_backtest_stats(filename, stats)

    # get real Filename (it's btresult-<date>.json)
    last_fn = get_latest_backtest_filename(filename_last.parent)
    assert re.match(r"btresult-.*\.json", last_fn)

    filename1 = (testdatadir / last_fn)
    assert filename1.is_file()
    content = filename1.read_text()
    assert 'max_drawdown' in content
    assert 'strategy' in content
    assert 'pairlist' in content

    assert filename_last.is_file()

    _clean_test_file(filename_last)
    filename1.unlink()


def test_store_backtest_stats(testdatadir, mocker):

    dump_mock = mocker.patch('freqtrade.optimize.optimize_reports.file_dump_json')

    store_backtest_stats(testdatadir, {})

    assert dump_mock.call_count == 2
    assert isinstance(dump_mock.call_args_list[0][0][0], Path)
    assert str(dump_mock.call_args_list[0][0][0]).startswith(str(testdatadir/'backtest-result'))

    dump_mock.reset_mock()
    filename = testdatadir / 'testresult.json'
    store_backtest_stats(filename, {})
    assert dump_mock.call_count == 2
    assert isinstance(dump_mock.call_args_list[0][0][0], Path)
    # result will be testdatadir / testresult-<timestamp>.json
    assert str(dump_mock.call_args_list[0][0][0]).startswith(str(testdatadir / 'testresult'))


def test_generate_pair_metrics(default_conf, mocker):

    results = pd.DataFrame(
@ -68,6 +190,21 @@ def test_generate_pair_metrics(default_conf, mocker):
        pytest.approx(pair_results[-1]['profit_sum_pct']) == pair_results[-1]['profit_sum'] * 100)


def test_generate_daily_stats(testdatadir):

    filename = testdatadir / "backtest-result_new.json"
    bt_data = load_backtest_data(filename)
    res = generate_daily_stats(bt_data)
    assert isinstance(res, dict)
    assert round(res['backtest_best_day'], 4) == 0.1796
    assert round(res['backtest_worst_day'], 4) == -0.1468
    assert res['winning_days'] == 14
    assert res['draw_days'] == 4
    assert res['losing_days'] == 3
    assert res['winner_holding_avg'] == timedelta(seconds=1440)
    assert res['loser_holding_avg'] == timedelta(days=1, seconds=21420)


def test_text_table_sell_reason(default_conf):

    results = pd.DataFrame(
@ -188,77 +325,3 @@ def test_generate_edge_table(edge_conf, mocker):
    assert generate_edge_table(results).count('| ETH/BTC |') == 1
    assert generate_edge_table(results).count(
        '| Risk Reward Ratio | Required Risk Reward | Expectancy |') == 1


def test_backtest_record(default_conf, fee, mocker):
    names = []
    records = []
    patch_exchange(mocker)
    mocker.patch('freqtrade.exchange.Exchange.get_fee', fee)
    mocker.patch(
        'freqtrade.optimize.optimize_reports.file_dump_json',
        new=lambda n, r: (names.append(n), records.append(r))
    )

    results = {'DefStrat': pd.DataFrame({"pair": ["UNITTEST/BTC", "UNITTEST/BTC",
                                                  "UNITTEST/BTC", "UNITTEST/BTC"],
                                         "profit_percent": [0.003312, 0.010801, 0.013803, 0.002780],
                                         "profit_abs": [0.000003, 0.000011, 0.000014, 0.000003],
                                         "open_time": [Arrow(2017, 11, 14, 19, 32, 00).datetime,
                                                       Arrow(2017, 11, 14, 21, 36, 00).datetime,
                                                       Arrow(2017, 11, 14, 22, 12, 00).datetime,
                                                       Arrow(2017, 11, 14, 22, 44, 00).datetime],
                                         "close_time": [Arrow(2017, 11, 14, 21, 35, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 10, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 43, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 58, 00).datetime],
                                         "open_rate": [0.002543, 0.003003, 0.003089, 0.003214],
                                         "close_rate": [0.002546, 0.003014, 0.003103, 0.003217],
                                         "open_index": [1, 119, 153, 185],
                                         "close_index": [118, 151, 184, 199],
                                         "trade_duration": [123, 34, 31, 14],
                                         "open_at_end": [False, False, False, True],
                                         "sell_reason": [SellType.ROI, SellType.STOP_LOSS,
                                                         SellType.ROI, SellType.FORCE_SELL]
                                         })}
    store_backtest_result(Path("backtest-result.json"), results)
    # Assert file_dump_json was only called once
    assert names == [Path('backtest-result.json')]
    records = records[0]
    # Ensure records are of correct type
    assert len(records) == 4

    # reset test to test with strategy name
    names = []
    records = []
    results['Strat'] = results['DefStrat']
    results['Strat2'] = results['DefStrat']
    store_backtest_result(Path("backtest-result.json"), results)
    assert names == [
        Path('backtest-result-DefStrat.json'),
        Path('backtest-result-Strat.json'),
        Path('backtest-result-Strat2.json'),
    ]
    records = records[0]
    # Ensure records are of correct type
    assert len(records) == 4

    # ('UNITTEST/BTC', 0.00331158, '1510684320', '1510691700', 0, 117)
    # Below follows just a typecheck of the schema/type of trade-records
    oix = None
    for (pair, profit, date_buy, date_sell, buy_index, dur,
         openr, closer, open_at_end, sell_reason) in records:
        assert pair == 'UNITTEST/BTC'
        assert isinstance(profit, float)
        # FIX: buy/sell should be converted to ints
        assert isinstance(date_buy, float)
        assert isinstance(date_sell, float)
        assert isinstance(openr, float)
        assert isinstance(closer, float)
        assert isinstance(open_at_end, bool)
        assert isinstance(sell_reason, str)
        isinstance(buy_index, pd._libs.tslib.Timestamp)
        if oix:
            assert buy_index > oix
        oix = buy_index
        assert dur > 0

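The report tests above assert on a nested stats dictionary (a `strategy` mapping with per-strategy keys such as `max_drawdown`, `pairlist` and `drawdown_start`) that `store_backtest_stats` writes to disk. As a hedged illustration only, reading such an export back could look like the sketch below; the key names come from the assertions above, while the file handling and path are generic assumptions.

```python
# Sketch: inspect a stored backtest-stats export. The keys used here
# ('strategy', 'max_drawdown', 'pairlist') appear in the assertions above;
# the example path is a placeholder.
import json
from pathlib import Path


def summarize_backtest_stats(path: Path) -> None:
    stats = json.loads(Path(path).read_text())
    for name, strat in stats.get('strategy', {}).items():
        print(f"{name}: max_drawdown={strat.get('max_drawdown')}, "
              f"pairlist={strat.get('pairlist')}")


# summarize_backtest_stats(Path('user_data/backtest_results/backtest-result_new.json'))
```
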
@ -320,7 +320,7 @@ def test_edge_overrides_stoploss(limit_buy_order, fee, caplog, mocker, edge_conf

    # stoploss shoud be hit
    assert freqtrade.handle_trade(trade) is True
    assert log_has('Executing Sell for NEO/BTC. Reason: SellType.STOP_LOSS', caplog)
    assert log_has('Executing Sell for NEO/BTC. Reason: stop_loss', caplog)
    assert trade.sell_reason == SellType.STOP_LOSS.value

@ -267,7 +267,7 @@ def test_generate_profit_graph(testdatadir):
    trades = load_backtest_data(filename)
    timerange = TimeRange.parse_timerange("20180110-20180112")
    pairs = ["TRX/BTC", "XLM/BTC"]
    trades = trades[trades['close_time'] < pd.Timestamp('2018-01-12', tz='UTC')]
    trades = trades[trades['close_date'] < pd.Timestamp('2018-01-12', tz='UTC')]

    data = history.load_data(datadir=testdatadir,
                             pairs=pairs,

tests/testdata/.last_result.json (new file, vendored)
@ -0,0 +1 @@
{"latest_backtest":"backtest-result_new.json"}

tests/testdata/backtest-result_multistrat.json (new file, vendored)
File diff suppressed because one or more lines are too long

tests/testdata/backtest-result_new.json (new file, vendored)
File diff suppressed because one or more lines are too long