Merge branch 'develop' into backtest_live_models

commit 52b60c5cbb
@@ -17,7 +17,7 @@ repos:
         - types-filelock==3.2.7
         - types-requests==2.28.11.2
         - types-tabulate==0.9.0.0
-        - types-python-dateutil==2.8.19
+        - types-python-dateutil==2.8.19.1
   # stages: [push]

 - repo: https://github.com/pycqa/isort
@@ -37,6 +37,7 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
 | `noise_standard_deviation` | If set, FreqAI adds noise to the training features with the aim of preventing overfitting. FreqAI generates random deviates from a Gaussian distribution with a standard deviation of `noise_standard_deviation` and adds them to all data points. `noise_standard_deviation` should be kept relative to the normalized space, i.e., between -1 and 1. In other words, since data in FreqAI is always normalized to be between -1 and 1, `noise_standard_deviation: 0.05` would result in 32% of the data being randomly increased/decreased by more than 2.5% of the full range (i.e., the percent of data falling outside the first standard deviation). <br> **Datatype:** Float. <br> Default: `0`.
 | `outlier_protection_percentage` | Enable to prevent outlier detection methods from discarding too much data. If more than `outlier_protection_percentage` % of points are detected as outliers by the SVM or DBSCAN, FreqAI will log a warning message and ignore outlier detection, i.e., the original dataset will be kept intact. If the outlier protection is triggered, no predictions will be made based on the training dataset. <br> **Datatype:** Float. <br> Default: `30`.
 | `reverse_train_test_order` | Split the feature dataset (see below) and use the latest data split for training, and test on the historical split of the data. This allows the model to be trained up to the most recent data point, while avoiding overfitting. However, be sure you understand the unorthodox nature of this parameter before employing it. <br> **Datatype:** Boolean. <br> Default: `False` (no reversal).
+| `write_metrics_to_disk` | Collect train timings, inference timings and CPU usage in a JSON file. <br> **Datatype:** Boolean. <br> Default: `False`.
 | | **Data split parameters**
 | `data_split_parameters` | Include any additional parameters available from Scikit-learn `test_train_split()`, which are shown [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) (external website). <br> **Datatype:** Dictionary.
 | `test_size` | The fraction of data that should be used for testing instead of training. <br> **Datatype:** Positive float < 1.
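Editor's note: a hedged sketch of how the parameters in this table might be combined in a config. Nesting the feature-level options under `feature_parameters` follows the wider FreqAI docs and is an assumption here; the values are illustrative, not defaults.

```python
# Illustrative FreqAI config fragment (Python dict mirroring the JSON config).
freqai_config = {
    "freqai": {
        "enabled": True,
        "write_metrics_to_disk": True,  # train/inference timings + CPU load -> JSON
        "feature_parameters": {
            # assumed location for these feature-level options
            "noise_standard_deviation": 0.05,
            "outlier_protection_percentage": 30,
            "reverse_train_test_order": False,
        },
        "data_split_parameters": {
            # forwarded to sklearn's train_test_split()
            "test_size": 0.25,
            "shuffle": False,
        },
    }
}
```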
@@ -1,5 +1,5 @@
 markdown==3.3.7
-mkdocs==1.4.0
+mkdocs==1.4.1
 mkdocs-material==8.5.6
 mdx_truly_sane_lists==1.3
 pymdown-extensions==9.6
@@ -87,7 +87,7 @@ At this stage the bot contains the following stoploss support modes:
 2. Trailing stop loss.
 3. Trailing stop loss, custom positive loss.
 4. Trailing stop loss only once the trade has reached a certain offset.
-5. [Custom stoploss function](strategy-advanced.md#custom-stoploss)
+5. [Custom stoploss function](strategy-callbacks.md#custom-stoploss)

 ### Static Stop Loss
@@ -659,9 +659,9 @@ informative = self.dp.get_pair_dataframe(pair=inf_pair,
 ```

 !!! Warning "Warning about backtesting"
-    Be careful when using dataprovider in backtesting. `historic_ohlcv()` (and `get_pair_dataframe()`
-    for the backtesting runmode) provides the full time-range in one go,
-    so please be aware of it and make sure to not "look into the future" to avoid surprises when running in dry/live mode.
+    In backtesting, `dp.get_pair_dataframe()` behavior differs depending on where it's called.
+    Within `populate_*()` methods, `dp.get_pair_dataframe()` returns the full timerange. Please make sure to not "look into the future" to avoid surprises when running in dry/live mode.
+    Within [callbacks](strategy-callbacks.md), you'll get the full timerange up to the current (simulated) candle.

 ### *get_analyzed_dataframe(pair, timeframe)*
@@ -670,13 +670,13 @@ It can also be used in specific callbacks to get the signal that caused the acti
 ``` python
 # fetch current dataframe
 if self.dp.runmode.value in ('live', 'dry_run'):
     dataframe, last_updated = self.dp.get_analyzed_dataframe(pair=metadata['pair'],
                                                              timeframe=self.timeframe)
 ```

 !!! Note "No data available"
     Returns an empty dataframe if the requested pair was not cached.
     You can check for this with `if dataframe.empty:` and handle this case accordingly.
     This should not happen when using whitelisted pairs.

 ### *orderbook(pair, maximum)*
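Editor's note: a minimal sketch of the empty-dataframe guard the note above recommends. It assumes it runs inside a strategy callback where `self.dp` and `metadata` are available, as in the surrounding examples.

```python
# Guard against an uncached pair before using the analyzed dataframe.
dataframe, last_updated = self.dp.get_analyzed_dataframe(pair=metadata['pair'],
                                                         timeframe=self.timeframe)
if dataframe.empty:
    # Pair was not cached - should not happen for whitelisted pairs.
    return None
last_candle = dataframe.iloc[-1].squeeze()
```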
@@ -169,6 +169,43 @@ Example: Search dedicated strategy path.
 freqtrade list-strategies --strategy-path ~/.freqtrade/strategies/
 ```

+## List freqAI models
+
+Use the `list-freqaimodels` subcommand to see all freqAI models available.
+
+This subcommand is useful for finding problems in your environment with loading freqAI models: modules with models that contain errors and failed to load are printed in red (LOAD FAILED), while models with duplicate names are printed in yellow (DUPLICATE NAME).
+
+```
+usage: freqtrade list-freqaimodels [-h] [-v] [--logfile FILE] [-V] [-c PATH]
+                                   [-d PATH] [--userdir PATH]
+                                   [--freqaimodel-path PATH] [-1] [--no-color]
+
+optional arguments:
+  -h, --help            show this help message and exit
+  --freqaimodel-path PATH
+                        Specify additional lookup path for freqaimodels.
+  -1, --one-column      Print output in one column.
+  --no-color            Disable colorization of hyperopt results. May be
+                        useful if you are redirecting output to a file.
+
+Common arguments:
+  -v, --verbose         Verbose mode (-vv for more, -vvv to get all messages).
+  --logfile FILE        Log to the file specified. Special values are:
+                        'syslog', 'journald'. See the documentation for more
+                        details.
+  -V, --version         show program's version number and exit
+  -c PATH, --config PATH
+                        Specify configuration file (default:
+                        `userdir/config.json` or `config.json` whichever
+                        exists). Multiple --config options may be used. Can be
+                        set to `-` to read config from stdin.
+  -d PATH, --datadir PATH, --data-dir PATH
+                        Path to directory with historical backtesting data.
+  --userdir PATH, --user-data-dir PATH
+                        Path to userdata directory.
+
+```
+
 ## List Exchanges

 Use the `list-exchanges` subcommand to see the exchanges available for the bot.
@@ -15,9 +15,9 @@ from freqtrade.commands.db_commands import start_convert_db
 from freqtrade.commands.deploy_commands import (start_create_userdir, start_install_ui,
                                                 start_new_strategy)
 from freqtrade.commands.hyperopt_commands import start_hyperopt_list, start_hyperopt_show
-from freqtrade.commands.list_commands import (start_list_exchanges, start_list_markets,
-                                              start_list_strategies, start_list_timeframes,
-                                              start_show_trades)
+from freqtrade.commands.list_commands import (start_list_exchanges, start_list_freqAI_models,
+                                              start_list_markets, start_list_strategies,
+                                              start_list_timeframes, start_show_trades)
 from freqtrade.commands.optimize_commands import (start_backtesting, start_backtesting_show,
                                                   start_edge, start_hyperopt)
 from freqtrade.commands.pairlist_commands import start_test_pairlist
@@ -42,6 +42,8 @@ ARGS_EDGE = ARGS_COMMON_OPTIMIZE + ["stoploss_range"]
 ARGS_LIST_STRATEGIES = ["strategy_path", "print_one_column", "print_colorized",
                         "recursive_strategy_search"]

+ARGS_LIST_FREQAIMODELS = ["freqaimodel_path", "print_one_column", "print_colorized"]
+
 ARGS_LIST_HYPEROPTS = ["hyperopt_path", "print_one_column", "print_colorized"]

 ARGS_BACKTEST_SHOW = ["exportfilename", "backtest_show_pair_list"]
@@ -107,8 +109,8 @@ ARGS_ANALYZE_ENTRIES_EXITS = ["exportfilename", "analysis_groups", "enter_reason
                               "exit_reason_list", "indicator_list"]

 NO_CONF_REQURIED = ["convert-data", "convert-trade-data", "download-data", "list-timeframes",
-                    "list-markets", "list-pairs", "list-strategies", "list-data",
-                    "hyperopt-list", "hyperopt-show", "backtest-filter",
+                    "list-markets", "list-pairs", "list-strategies", "list-freqaimodels",
+                    "list-data", "hyperopt-list", "hyperopt-show", "backtest-filter",
                     "plot-dataframe", "plot-profit", "show-trades", "trades-to-ohlcv"]

 NO_CONF_ALLOWED = ["create-userdir", "list-exchanges", "new-strategy"]
@@ -193,10 +195,11 @@ class Arguments:
                                        start_create_userdir, start_download_data, start_edge,
                                        start_hyperopt, start_hyperopt_list, start_hyperopt_show,
                                        start_install_ui, start_list_data, start_list_exchanges,
-                                       start_list_markets, start_list_strategies,
-                                       start_list_timeframes, start_new_config, start_new_strategy,
-                                       start_plot_dataframe, start_plot_profit, start_show_trades,
-                                       start_test_pairlist, start_trading, start_webserver)
+                                       start_list_freqAI_models, start_list_markets,
+                                       start_list_strategies, start_list_timeframes,
+                                       start_new_config, start_new_strategy, start_plot_dataframe,
+                                       start_plot_profit, start_show_trades, start_test_pairlist,
+                                       start_trading, start_webserver)

         subparsers = self.parser.add_subparsers(dest='command',
                                                 # Use custom message when no subhandler is added
@@ -363,6 +366,15 @@ class Arguments:
         list_strategies_cmd.set_defaults(func=start_list_strategies)
         self._build_args(optionlist=ARGS_LIST_STRATEGIES, parser=list_strategies_cmd)

+        # Add list-freqAI Models subcommand
+        list_freqaimodels_cmd = subparsers.add_parser(
+            'list-freqaimodels',
+            help='Print available freqAI models.',
+            parents=[_common_parser],
+        )
+        list_freqaimodels_cmd.set_defaults(func=start_list_freqAI_models)
+        self._build_args(optionlist=ARGS_LIST_FREQAIMODELS, parser=list_freqaimodels_cmd)
+
         # Add list-timeframes subcommand
         list_timeframes_cmd = subparsers.add_parser(
             'list-timeframes',
@@ -1,7 +1,6 @@
 import csv
 import logging
 import sys
-from pathlib import Path
 from typing import Any, Dict, List

 import rapidjson
@@ -10,7 +9,6 @@ from colorama import init as colorama_init
 from tabulate import tabulate

 from freqtrade.configuration import setup_utils_configuration
-from freqtrade.constants import USERPATH_STRATEGIES
 from freqtrade.enums import RunMode
 from freqtrade.exceptions import OperationalException
 from freqtrade.exchange import market_is_active, validate_exchanges
@@ -41,7 +39,7 @@ def start_list_exchanges(args: Dict[str, Any]) -> None:
     print(tabulate(exchanges, headers=['Exchange name', 'Valid', 'reason']))


-def _print_objs_tabular(objs: List, print_colorized: bool, base_dir: Path) -> None:
+def _print_objs_tabular(objs: List, print_colorized: bool) -> None:
     if print_colorized:
         colorama_init(autoreset=True)
         red = Fore.RED
@@ -55,7 +53,7 @@ def _print_objs_tabular(objs: List, print_colorized: bool, base_dir: Path) -> No
     names = [s['name'] for s in objs]
     objs_to_print = [{
         'name': s['name'] if s['name'] else "--",
-        'location': s['location'].relative_to(base_dir),
+        'location': s['location_rel'],
         'status': (red + "LOAD FAILED" + reset if s['class'] is None
                    else "OK" if names.count(s['name']) == 1
                    else yellow + "DUPLICATE NAME" + reset)
@@ -76,9 +74,8 @@ def start_list_strategies(args: Dict[str, Any]) -> None:
     """
     config = setup_utils_configuration(args, RunMode.UTIL_NO_EXCHANGE)

-    directory = Path(config.get('strategy_path', config['user_data_dir'] / USERPATH_STRATEGIES))
     strategy_objs = StrategyResolver.search_all_objects(
-        directory, not args['print_one_column'], config.get('recursive_strategy_search', False))
+        config, not args['print_one_column'], config.get('recursive_strategy_search', False))
     # Sort alphabetically
     strategy_objs = sorted(strategy_objs, key=lambda x: x['name'])
     for obj in strategy_objs:
@@ -90,7 +87,22 @@ def start_list_strategies(args: Dict[str, Any]) -> None:
     if args['print_one_column']:
         print('\n'.join([s['name'] for s in strategy_objs]))
     else:
-        _print_objs_tabular(strategy_objs, config.get('print_colorized', False), directory)
+        _print_objs_tabular(strategy_objs, config.get('print_colorized', False))
+
+
+def start_list_freqAI_models(args: Dict[str, Any]) -> None:
+    """
+    Print files with FreqAI models custom classes available in the directory
+    """
+    config = setup_utils_configuration(args, RunMode.UTIL_NO_EXCHANGE)
+    from freqtrade.resolvers.freqaimodel_resolver import FreqaiModelResolver
+    model_objs = FreqaiModelResolver.search_all_objects(config, not args['print_one_column'])
+    # Sort alphabetically
+    model_objs = sorted(model_objs, key=lambda x: x['name'])
+    if args['print_one_column']:
+        print('\n'.join([s['name'] for s in model_objs]))
+    else:
+        _print_objs_tabular(model_objs, config.get('print_colorized', False))


 def start_list_timeframes(args: Dict[str, Any]) -> None:
@@ -540,6 +540,8 @@ CONF_SCHEMA = {
         "properties": {
             "enabled": {"type": "boolean", "default": False},
             "keras": {"type": "boolean", "default": False},
+            "write_metrics_to_disk": {"type": "boolean", "default": False},
             "purge_old_models": {"type": "boolean", "default": True},
             "conv_width": {"type": "integer", "default": 2},
             "train_period_days": {"type": "integer", "default": 0},
             "backtest_period_days": {"type": "number", "default": 7},
@@ -410,11 +410,13 @@ class Exchange:
         else:
             return DataFrame()

-    def get_contract_size(self, pair: str) -> float:
+    def get_contract_size(self, pair: str) -> Optional[float]:
         if self.trading_mode == TradingMode.FUTURES:
-            market = self.markets[pair]
+            market = self.markets.get(pair, {})
             contract_size: float = 1.0
-            if market['contractSize'] is not None:
+            if not market:
+                return None
+            if market.get('contractSize') is not None:
                 # ccxt has contractSize in markets as string
                 contract_size = float(market['contractSize'])
             return contract_size
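Editor's note: callers must now handle the `None` case. A hedged sketch of a defensive call site (`exchange`, `pair`, and `amount` are illustrative names, not freqtrade internals):

```python
# get_contract_size() returns None when the pair is missing from exchange.markets.
contract_size = exchange.get_contract_size(pair)
if contract_size is None:
    contract_size = 1.0  # assumption: fall back to one contract per unit
amount_in_contracts = amount / contract_size
```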
@@ -1934,6 +1936,7 @@ class Exchange:
                 candle_limit = self.ohlcv_candle_limit(timeframe, self._config['candle_type_def'])
                 # Age out old candles
                 ohlcv_df = ohlcv_df.tail(candle_limit + self._startup_candle_count)
+                ohlcv_df = ohlcv_df.reset_index(drop=True)
                 self._klines[(pair, timeframe, c_type)] = ohlcv_df
             else:
                 self._klines[(pair, timeframe, c_type)] = ohlcv_df
@@ -1,14 +1,15 @@
 import collections
 import json
 import logging
 import re
 import shutil
 import threading
+from datetime import datetime, timezone
 from pathlib import Path
 from typing import Any, Dict, Tuple, TypedDict

 import numpy as np
 import pandas as pd
 import psutil
 import rapidjson
 from joblib import dump, load
 from joblib.externals import cloudpickle
@@ -65,6 +66,8 @@ class FreqaiDataDrawer:
         self.pair_dict: Dict[str, pair_info] = {}
         # dictionary holding all actively inferenced models in memory given a model filename
         self.model_dictionary: Dict[str, Any] = {}
+        # all additional metadata that we want to keep in ram
+        self.meta_data_dictionary: Dict[str, Dict[str, Any]] = {}
         self.model_return_values: Dict[str, DataFrame] = {}
         self.historic_data: Dict[str, Dict[str, DataFrame]] = {}
         self.historic_predictions: Dict[str, DataFrame] = {}
@@ -78,30 +81,60 @@ class FreqaiDataDrawer:
         self.historic_predictions_bkp_path = Path(
             self.full_path / "historic_predictions.backup.pkl")
         self.pair_dictionary_path = Path(self.full_path / "pair_dictionary.json")
+        self.metric_tracker_path = Path(self.full_path / "metric_tracker.json")
         self.follow_mode = follow_mode
         if follow_mode:
             self.create_follower_dict()
         self.load_drawer_from_disk()
         self.load_historic_predictions_from_disk()
+        self.load_metric_tracker_from_disk()
         self.training_queue: Dict[str, int] = {}
         self.history_lock = threading.Lock()
         self.save_lock = threading.Lock()
         self.pair_dict_lock = threading.Lock()
+        self.metric_tracker_lock = threading.Lock()
         self.old_DBSCAN_eps: Dict[str, float] = {}
         self.empty_pair_dict: pair_info = {
             "model_filename": "", "trained_timestamp": 0,
             "data_path": "", "extras": {}}
+        self.metric_tracker: Dict[str, Dict[str, Dict[str, list]]] = {}
+
+    def update_metric_tracker(self, metric: str, value: float, pair: str) -> None:
+        """
+        General utility for adding and updating custom metrics. Typically used
+        for adding training performance, train timings, inference timings, CPU loads etc.
+        """
+        with self.metric_tracker_lock:
+            if pair not in self.metric_tracker:
+                self.metric_tracker[pair] = {}
+            if metric not in self.metric_tracker[pair]:
+                self.metric_tracker[pair][metric] = {'timestamp': [], 'value': []}
+
+            timestamp = int(datetime.now(timezone.utc).timestamp())
+            self.metric_tracker[pair][metric]['value'].append(value)
+            self.metric_tracker[pair][metric]['timestamp'].append(timestamp)
+
+    def collect_metrics(self, time_spent: float, pair: str):
+        """
+        Add metrics to the metric tracker dictionary
+        """
+        load1, load5, load15 = psutil.getloadavg()
+        cpus = psutil.cpu_count()
+        self.update_metric_tracker('train_time', time_spent, pair)
+        self.update_metric_tracker('cpu_load1min', load1 / cpus, pair)
+        self.update_metric_tracker('cpu_load5min', load5 / cpus, pair)
+        self.update_metric_tracker('cpu_load15min', load15 / cpus, pair)
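Editor's note: the tracker is a plain nested dict keyed pair → metric → parallel timestamp/value lists. A standalone re-implementation for illustration (not the freqtrade class) shows how entries accumulate:

```python
from datetime import datetime, timezone

metric_tracker: dict = {}

def update_metric_tracker(metric: str, value: float, pair: str) -> None:
    # pair -> metric -> {'timestamp': [...], 'value': [...]} (parallel lists)
    pair_metrics = metric_tracker.setdefault(pair, {})
    series = pair_metrics.setdefault(metric, {'timestamp': [], 'value': []})
    series['timestamp'].append(int(datetime.now(timezone.utc).timestamp()))
    series['value'].append(value)

update_metric_tracker('train_time', 12.3, 'BTC/USDT')
update_metric_tracker('train_time', 11.8, 'BTC/USDT')
# metric_tracker == {'BTC/USDT': {'train_time': {'timestamp': [...], 'value': [12.3, 11.8]}}}
```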
     def load_drawer_from_disk(self):
         """
         Locate and load a previously saved data drawer full of all pair model metadata in
         present model folder.
-        :return: bool - whether or not the drawer was located
+        Load any existing metric tracker that may be present.
         """
         exists = self.pair_dictionary_path.is_file()
         if exists:
             with open(self.pair_dictionary_path, "r") as fp:
-                self.pair_dict = json.load(fp)
+                self.pair_dict = rapidjson.load(fp, number_mode=rapidjson.NM_NATIVE)
         elif not self.follow_mode:
             logger.info("Could not find existing datadrawer, starting from scratch")
         else:
@@ -110,7 +143,18 @@ class FreqaiDataDrawer:
                 "sending null values back to strategy"
             )

-        return exists
+    def load_metric_tracker_from_disk(self):
+        """
+        Tries to load an existing metrics dictionary if the user
+        wants to collect metrics.
+        """
+        if self.freqai_info.get('write_metrics_to_disk', False):
+            exists = self.metric_tracker_path.is_file()
+            if exists:
+                with open(self.metric_tracker_path, "r") as fp:
+                    self.metric_tracker = rapidjson.load(fp, number_mode=rapidjson.NM_NATIVE)
+            else:
+                logger.info("Could not find existing metric tracker, starting from scratch")

     def load_historic_predictions_from_disk(self):
         """
@@ -146,7 +190,7 @@ class FreqaiDataDrawer:

     def save_historic_predictions_to_disk(self):
         """
-        Save data drawer full of all pair model metadata in present model folder.
+        Save historic predictions pickle to disk
         """
         with open(self.historic_predictions_path, "wb") as fp:
             cloudpickle.dump(self.historic_predictions, fp, protocol=cloudpickle.DEFAULT_PROTOCOL)
@@ -154,6 +198,15 @@ class FreqaiDataDrawer:
         # create a backup
         shutil.copy(self.historic_predictions_path, self.historic_predictions_bkp_path)

+    def save_metric_tracker_to_disk(self):
+        """
+        Save metric tracker of all pair metrics collected.
+        """
+        with self.save_lock:
+            with open(self.metric_tracker_path, 'w') as fp:
+                rapidjson.dump(self.metric_tracker, fp, default=self.np_encoder,
+                               number_mode=rapidjson.NM_NATIVE)
+
     def save_drawer_to_disk(self):
         """
         Save data drawer full of all pair model metadata in present model folder.
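Editor's note: once `write_metrics_to_disk` is enabled, the file can be inspected offline. A hedged sketch of reading it back for analysis — the path layout follows `metric_tracker_path` above, but the model-folder name is illustrative:

```python
import json
from pathlib import Path

import pandas as pd

# Illustrative location: <user_data>/models/<identifier>/metric_tracker.json
path = Path("user_data/models/example-id/metric_tracker.json")
metrics = json.loads(path.read_text())

# Turn one pair's train-time series into a DataFrame for plotting/analysis.
df = pd.DataFrame(metrics["BTC/USDT"]["train_time"])  # columns: 'timestamp', 'value'
df["timestamp"] = pd.to_datetime(df["timestamp"], unit="s")
print(df.tail())
```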
@@ -453,9 +506,14 @@ class FreqaiDataDrawer:
         )

-        # if self.live:
+        # store as much in ram as possible to increase performance
         self.model_dictionary[coin] = model
         self.pair_dict[coin]["model_filename"] = dk.model_filename
         self.pair_dict[coin]["data_path"] = str(dk.data_path)
+        if coin not in self.meta_data_dictionary:
+            self.meta_data_dictionary[coin] = {}
+        self.meta_data_dictionary[coin]["train_df"] = dk.data_dictionary["train_features"]
+        self.meta_data_dictionary[coin]["meta_data"] = dk.data
         self.save_drawer_to_disk()

         return
@@ -466,7 +524,7 @@ class FreqaiDataDrawer:
         presaved backtesting (prediction file loading).
         """
         with open(dk.data_path / f"{dk.model_filename}_metadata.json", "r") as fp:
-            dk.data = json.load(fp)
+            dk.data = rapidjson.load(fp, number_mode=rapidjson.NM_NATIVE)
             dk.training_features_list = dk.data["training_features_list"]
             dk.label_list = dk.data["label_list"]
@@ -492,15 +550,20 @@ class FreqaiDataDrawer:
             / dk.data_path.parts[-1]
         )

-        with open(dk.data_path / f"{dk.model_filename}_metadata.json", "r") as fp:
-            dk.data = json.load(fp)
-            dk.training_features_list = dk.data["training_features_list"]
-            dk.label_list = dk.data["label_list"]
-
-        dk.data_dictionary["train_features"] = pd.read_pickle(
-            dk.data_path / f"{dk.model_filename}_trained_df.pkl"
-        )
+        if coin in self.meta_data_dictionary:
+            dk.data = self.meta_data_dictionary[coin]["meta_data"]
+            dk.data_dictionary["train_features"] = self.meta_data_dictionary[coin]["train_df"]
+        else:
+            with open(dk.data_path / f"{dk.model_filename}_metadata.json", "r") as fp:
+                dk.data = rapidjson.load(fp, number_mode=rapidjson.NM_NATIVE)
+
+            dk.data_dictionary["train_features"] = pd.read_pickle(
+                dk.data_path / f"{dk.model_filename}_trained_df.pkl"
+            )
+
+        dk.training_features_list = dk.data["training_features_list"]
+        dk.label_list = dk.data["label_list"]

         # try to access model in memory instead of loading object from disk to save time
         if dk.live and coin in self.model_dictionary:
             model = self.model_dictionary[coin]
@@ -627,22 +690,3 @@ class FreqaiDataDrawer:
         ).reset_index(drop=True)

         return corr_dataframes, base_dataframes
-
-    # to be used if we want to send predictions directly to the follower instead of forcing
-    # follower to load models and inference
-    # def save_model_return_values_to_disk(self) -> None:
-    #     with open(self.full_path / str('model_return_values.json'), "w") as fp:
-    #         json.dump(self.model_return_values, fp, default=self.np_encoder)
-
-    # def load_model_return_values_from_disk(self, dk: FreqaiDataKitchen) -> FreqaiDataKitchen:
-    #     exists = Path(self.full_path / str('model_return_values.json')).resolve().exists()
-    #     if exists:
-    #         with open(self.full_path / str('model_return_values.json'), "r") as fp:
-    #             self.model_return_values = json.load(fp)
-    #     elif not self.follow_mode:
-    #         logger.info("Could not find existing datadrawer, starting from scratch")
-    #     else:
-    #         logger.warning(f'Follower could not find pair_dictionary at {self.full_path} '
-    #                        'sending null values back to strategy')
-
-    #     return exists, dk
@@ -7,7 +7,7 @@ from collections import deque
 from datetime import datetime, timezone
 from pathlib import Path
 from threading import Lock
-from typing import Any, Dict, List, Tuple
+from typing import Any, Dict, List, Literal, Tuple

 import numpy as np
 import pandas as pd
@@ -148,7 +148,7 @@ class IFreqaiModel(ABC):
         dataframe = dk.remove_features_from_df(dk.return_dataframe)
         self.clean_up()
         if self.live:
-            self.inference_timer('stop')
+            self.inference_timer('stop', metadata["pair"])
         return dataframe

     def clean_up(self):
@@ -217,12 +217,14 @@ class IFreqaiModel(ABC):
                 logger.warning(f"Training {pair} raised exception {msg.__class__.__name__}. "
                                f"Message: {msg}, skipping.")

-            self.train_timer('stop')
+            self.train_timer('stop', pair)

             # only rotate the queue after the first has been trained.
             self.train_queue.rotate(-1)

             self.dd.save_historic_predictions_to_disk()
+            if self.freqai_info.get('write_metrics_to_disk', False):
+                self.dd.save_metric_tracker_to_disk()

     def start_backtesting(
         self, dataframe: DataFrame, metadata: dict, dk: FreqaiDataKitchen
@@ -677,7 +679,7 @@ class IFreqaiModel(ABC):

         return

-    def inference_timer(self, do='start'):
+    def inference_timer(self, do: Literal['start', 'stop'] = 'start', pair: str = ''):
         """
         Timer designed to track the cumulative time spent in FreqAI for one pass through
         the whitelist. This will check if the time spent is more than 1/4 the time
@@ -688,7 +690,10 @@ class IFreqaiModel(ABC):
             self.begin_time = time.time()
         elif do == 'stop':
             end = time.time()
-            self.inference_time += (end - self.begin_time)
+            time_spent = (end - self.begin_time)
+            if self.freqai_info.get('write_metrics_to_disk', False):
+                self.dd.update_metric_tracker('inference_time', time_spent, pair)
+            self.inference_time += time_spent
             if self.pair_it == self.total_pairs:
                 logger.info(
                     f'Total time spent inferencing pairlist {self.inference_time:.2f} seconds')
@@ -699,7 +704,7 @@ class IFreqaiModel(ABC):
             self.inference_time = 0
         return

-    def train_timer(self, do='start'):
+    def train_timer(self, do: Literal['start', 'stop'] = 'start', pair: str = ''):
         """
         Timer designed to track the cumulative time spent training the full pairlist in
         FreqAI.
@@ -709,7 +714,11 @@ class IFreqaiModel(ABC):
             self.begin_time_train = time.time()
         elif do == 'stop':
             end = time.time()
-            self.train_time += (end - self.begin_time_train)
+            time_spent = (end - self.begin_time_train)
+            if self.freqai_info.get('write_metrics_to_disk', False):
+                self.dd.collect_metrics(time_spent, pair)
+
+            self.train_time += time_spent
             if self.pair_it_train == self.total_pairs:
                 logger.info(
                     f'Total time spent training pairlist {self.train_time:.2f} seconds')
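Editor's note: both timers now take a `Literal['start', 'stop']` mode plus the pair being timed, so a type checker can reject typos in the mode argument. A standalone sketch of the pattern, using only the standard library:

```python
import time
from typing import Literal

class Timed:
    """Minimal start/stop timer mirroring the pattern above."""

    def __init__(self) -> None:
        self.total = 0.0
        self._begin = 0.0

    def timer(self, do: Literal['start', 'stop'] = 'start') -> None:
        # mypy rejects timer('stpo'): it is not a member of the Literal type.
        if do == 'start':
            self._begin = time.time()
        elif do == 'stop':
            self.total += time.time() - self._begin
```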
@@ -1,4 +1,5 @@
 import logging
+import sys
 from pathlib import Path
 from typing import Any, Dict
@@ -30,6 +31,14 @@ class CatboostClassifier(BaseClassifierModel):
             label=data_dictionary["train_labels"],
             weight=data_dictionary["train_weights"],
         )
+        if self.freqai_info.get("data_split_parameters", {}).get("test_size", 0.1) == 0:
+            test_data = None
+        else:
+            test_data = Pool(
+                data=data_dictionary["test_features"],
+                label=data_dictionary["test_labels"],
+                weight=data_dictionary["test_weights"],
+            )

         cbr = CatBoostClassifier(
             allow_writing_files=True,
@@ -40,6 +49,7 @@ class CatboostClassifier(BaseClassifierModel):

         init_model = self.get_init_model(dk.pair)

-        cbr.fit(train_data, init_model=init_model)
+        cbr.fit(X=train_data, eval_set=test_data, init_model=init_model,
+                log_cout=sys.stdout, log_cerr=sys.stderr)

         return cbr
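Editor's note: the `test_size == 0` guard recurs across the model classes: with no held-out split, no eval set may be passed to the fit call. A hedged distillation of the idea (`freqai_info` and the `data_dictionary` keys follow the conventions used above):

```python
def build_eval_set(self, data_dictionary: dict):
    # With test_size == 0 there is no held-out data, so skip the eval set entirely.
    if self.freqai_info.get("data_split_parameters", {}).get("test_size", 0.1) == 0:
        return None
    return [(data_dictionary["test_features"], data_dictionary["test_labels"])]
```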
@@ -1,4 +1,5 @@
 import logging
+import sys
 from pathlib import Path
 from typing import Any, Dict
@@ -47,6 +48,7 @@ class CatboostRegressor(BaseRegressionModel):
             **self.model_training_parameters,
         )

-        model.fit(X=train_data, eval_set=test_data, init_model=init_model)
+        model.fit(X=train_data, eval_set=test_data, init_model=init_model,
+                  log_cout=sys.stdout, log_cerr=sys.stderr)

         return model
@@ -1,4 +1,5 @@
 import logging
+import sys
 from pathlib import Path
 from typing import Any, Dict
@@ -58,8 +59,10 @@ class CatboostRegressorMultiTarget(BaseRegressionModel):

         fit_params = []
         for i in range(len(eval_sets)):
-            fit_params.append(
-                {'eval_set': eval_sets[i], 'init_model': init_models[i]})
+            fit_params.append({
+                'eval_set': eval_sets[i], 'init_model': init_models[i],
+                'log_cout': sys.stdout, 'log_cerr': sys.stderr,
+            })

         model = FreqaiMultiOutputRegressor(estimator=cbr)
         thread_training = self.freqai_info.get('multitarget_parallel_training', False)
freqtrade/freqai/prediction_models/XGBoostRFClassifier.py (new file, 85 lines)
@@ -0,0 +1,85 @@
+import logging
+from typing import Any, Dict, Tuple
+
+import numpy as np
+import numpy.typing as npt
+import pandas as pd
+from pandas import DataFrame
+from pandas.api.types import is_integer_dtype
+from sklearn.preprocessing import LabelEncoder
+from xgboost import XGBRFClassifier
+
+from freqtrade.freqai.base_models.BaseClassifierModel import BaseClassifierModel
+from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
+
+
+logger = logging.getLogger(__name__)
+
+
+class XGBoostRFClassifier(BaseClassifierModel):
+    """
+    User created prediction model. The class needs to override three necessary
+    functions, predict(), train(), fit(). The class inherits ModelHandler which
+    has its own DataHandler where data is held, saved, loaded, and managed.
+    """
+
+    def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
+        """
+        User sets up the training and test data to fit their desired model here
+        :params:
+        :data_dictionary: the dictionary constructed by DataHandler to hold
+                          all the training and test data/labels.
+        """
+
+        X = data_dictionary["train_features"].to_numpy()
+        y = data_dictionary["train_labels"].to_numpy()[:, 0]
+
+        le = LabelEncoder()
+        if not is_integer_dtype(y):
+            y = pd.Series(le.fit_transform(y), dtype="int64")
+
+        if self.freqai_info.get('data_split_parameters', {}).get('test_size', 0.1) == 0:
+            eval_set = None
+        else:
+            test_features = data_dictionary["test_features"].to_numpy()
+            test_labels = data_dictionary["test_labels"].to_numpy()[:, 0]
+
+            if not is_integer_dtype(test_labels):
+                test_labels = pd.Series(le.transform(test_labels), dtype="int64")
+
+            eval_set = [(test_features, test_labels)]
+
+        train_weights = data_dictionary["train_weights"]
+
+        init_model = self.get_init_model(dk.pair)
+
+        model = XGBRFClassifier(**self.model_training_parameters)
+
+        model.fit(X=X, y=y, eval_set=eval_set, sample_weight=train_weights,
+                  xgb_model=init_model)
+
+        return model
+
+    def predict(
+        self, unfiltered_df: DataFrame, dk: FreqaiDataKitchen, **kwargs
+    ) -> Tuple[DataFrame, npt.NDArray[np.int_]]:
+        """
+        Filter the prediction features data and predict with it.
+        :param: unfiltered_df: Full dataframe for the current backtest period.
+        :return:
+        :pred_df: dataframe containing the predictions
+        :do_predict: np.array of 1s and 0s to indicate places where freqai needed to remove
+        data (NaNs) or felt uncertain about data (PCA and DI index)
+        """
+
+        (pred_df, dk.do_predict) = super().predict(unfiltered_df, dk, **kwargs)
+
+        le = LabelEncoder()
+        label = dk.label_list[0]
+        labels_before = list(dk.data['labels_std'].keys())
+        labels_after = le.fit_transform(labels_before).tolist()
+        pred_df[label] = le.inverse_transform(pred_df[label])
+        pred_df = pred_df.rename(
+            columns={labels_after[i]: labels_before[i] for i in range(len(labels_before))})
+
+        return (pred_df, dk.do_predict)
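Editor's note: the classifier is fit on integer-encoded labels, so `predict()` has to map predictions back to the original strings. A standalone sketch of the `LabelEncoder` round-trip on toy data (not the freqtrade API):

```python
from sklearn.preprocessing import LabelEncoder

labels = ["down", "up", "up", "down"]
le = LabelEncoder()
encoded = le.fit_transform(labels)       # e.g. array([0, 1, 1, 0])
decoded = le.inverse_transform(encoded)  # back to the original strings
assert list(decoded) == labels
```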
freqtrade/freqai/prediction_models/XGBoostRFRegressor.py (new file, 45 lines)
@@ -0,0 +1,45 @@
+import logging
+from typing import Any, Dict
+
+from xgboost import XGBRFRegressor
+
+from freqtrade.freqai.base_models.BaseRegressionModel import BaseRegressionModel
+from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
+
+
+logger = logging.getLogger(__name__)
+
+
+class XGBoostRFRegressor(BaseRegressionModel):
+    """
+    User created prediction model. The class needs to override three necessary
+    functions, predict(), train(), fit(). The class inherits ModelHandler which
+    has its own DataHandler where data is held, saved, loaded, and managed.
+    """
+
+    def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
+        """
+        User sets up the training and test data to fit their desired model here
+        :param data_dictionary: the dictionary constructed by DataHandler to hold
+                                all the training and test data/labels.
+        """
+
+        X = data_dictionary["train_features"]
+        y = data_dictionary["train_labels"]
+
+        if self.freqai_info.get("data_split_parameters", {}).get("test_size", 0.1) == 0:
+            eval_set = None
+            eval_weights = None
+        else:
+            eval_set = [(data_dictionary["test_features"], data_dictionary["test_labels"])]
+            eval_weights = [data_dictionary['test_weights']]
+
+        sample_weight = data_dictionary["train_weights"]
+
+        xgb_model = self.get_init_model(dk.pair)
+
+        model = XGBRFRegressor(**self.model_training_parameters)
+
+        model.fit(X=X, y=y, sample_weight=sample_weight, eval_set=eval_set,
+                  sample_weight_eval_set=eval_weights, xgb_model=xgb_model)
+
+        return model
@@ -155,6 +155,8 @@ class Backtesting:
         self.trading_mode: TradingMode = config.get('trading_mode', TradingMode.SPOT)
         # strategies which define "can_short=True" will fail to load in Spot mode.
         self._can_short = self.trading_mode != TradingMode.SPOT
+        self._position_stacking: bool = self.config.get('position_stacking', False)
+        self.enable_protections: bool = self.config.get('enable_protections', False)

         self.init_backtest()
@@ -923,14 +925,12 @@ class Backtesting:
         return trade

     def handle_left_open(self, open_trades: Dict[str, List[LocalTrade]],
-                         data: Dict[str, List[Tuple]]) -> List[LocalTrade]:
+                         data: Dict[str, List[Tuple]]) -> None:
         """
         Handling of left open trades at the end of backtesting
         """
-        trades = []
         for pair in open_trades.keys():
-            if len(open_trades[pair]) > 0:
-                for trade in open_trades[pair]:
+            for trade in list(open_trades[pair]):
                 if trade.open_order_id and trade.nr_of_successful_entries == 0:
                     # Ignore trade if entry-order did not fill yet
                     continue
@@ -942,11 +942,6 @@ class Backtesting:
                 trade.exit_reason = ExitType.FORCE_EXIT.value
                 trade.close(exit_row[OPEN_IDX], show_msg=False)
                 LocalTrade.close_bt_trade(trade)
-                # Deepcopy object to have wallets update correctly
-                trade1 = deepcopy(trade)
-                trade1.is_open = True
-                trades.append(trade1)
-        return trades

     def trade_slot_available(self, max_open_trades: int, open_trade_count: int) -> bool:
         # Always allow trades when max_open_trades is enabled.
@@ -970,9 +965,8 @@ class Backtesting:
             return 'short'
         return None

-    def run_protections(
-            self, enable_protections, pair: str, current_time: datetime, side: LongShort):
-        if enable_protections:
+    def run_protections(self, pair: str, current_time: datetime, side: LongShort):
+        if self.enable_protections:
             self.protections.stop_per_pair(pair, current_time, side)
             self.protections.global_stop(current_time, side)
@@ -1078,65 +1072,20 @@ class Backtesting:
             return None
         return row

-    def backtest(self, processed: Dict,  # noqa: max-complexity: 13
-                 start_date: datetime, end_date: datetime,
-                 max_open_trades: int = 0, position_stacking: bool = False,
-                 enable_protections: bool = False) -> Dict[str, Any]:
+    def backtest_loop(
+            self, row: Tuple, pair: str, current_time: datetime, end_date: datetime,
+            max_open_trades: int, open_trade_count_start: int) -> int:
         """
-        Implement backtesting functionality
-
-        NOTE: This method is used by Hyperopt at each iteration. Please keep it optimized.
-        Of course try to not have ugly code. By some accessor are sometime slower than functions.
-        Avoid extensive logging in this method and functions it calls.
-
-        :param processed: a processed dictionary with format {pair, data}, which gets cleared to
-            optimize memory usage!
-        :param start_date: backtesting timerange start datetime
-        :param end_date: backtesting timerange end datetime
-        :param max_open_trades: maximum number of concurrent trades, <= 0 means unlimited
-        :param position_stacking: do we allow position stacking?
-        :param enable_protections: Should protections be enabled?
-        :return: DataFrame with trades (results of backtesting)
+        Backtesting processing for one candle/pair.
         """
-        trades: List[LocalTrade] = []
-        self.prepare_backtest(enable_protections)
-        # Ensure wallets are uptodate (important for --strategy-list)
-        self.wallets.update()
-        # Use dict of lists with data for performance
-        # (looping lists is a lot faster than pandas DataFrames)
-        data: Dict = self._get_ohlcv_as_lists(processed)
-
-        # Indexes per pair, so some pairs are allowed to have a missing start.
-        indexes: Dict = defaultdict(int)
-        current_time = start_date + timedelta(minutes=self.timeframe_min)
-
-        open_trades: Dict[str, List[LocalTrade]] = defaultdict(list)
-        open_trade_count = 0
-
-        self.progress.init_step(BacktestState.BACKTEST, int(
-            (end_date - start_date) / timedelta(minutes=self.timeframe_min)))
-
-        # Loop timerange and get candle for each pair at that point in time
-        while current_time <= end_date:
-            open_trade_count_start = open_trade_count
-            self.check_abort()
-            for i, pair in enumerate(data):
-                row_index = indexes[pair]
-                row = self.validate_row(data, pair, row_index, current_time)
-                if not row:
-                    continue
-
-                row_index += 1
-                indexes[pair] = row_index
-                self.dataprovider._set_dataframe_max_index(row_index)
-
-                for t in list(open_trades[pair]):
+        for t in list(LocalTrade.bt_trades_open_pp[pair]):
             # 1. Manage currently open orders of active trades
             if self.manage_open_orders(t, current_time, row):
                 # Close trade
-                open_trade_count -= 1
-                open_trades[pair].remove(t)
-                LocalTrade.trades_open.remove(t)
+                open_trade_count_start -= 1
+                LocalTrade.remove_bt_trade(t)
                 self.wallets.update()

         # 2. Process entries.
@@ -1145,7 +1094,7 @@ class Backtesting:
             # don't open on the last row
             trade_dir = self.check_for_trade_entry(row)
             if (
-                (position_stacking or len(open_trades[pair]) == 0)
+                (self._position_stacking or len(LocalTrade.bt_trades_open_pp[pair]) == 0)
                 and self.trade_slot_available(max_open_trades, open_trade_count_start)
                 and current_time != end_date
                 and trade_dir is not None
|
||||
# This emulates previous behavior - not sure if this is correct
|
||||
# Prevents entering if the trade-slot was freed in this candle
|
||||
open_trade_count_start += 1
|
||||
open_trade_count += 1
|
||||
# logger.debug(f"{pair} - Emulate creation of new trade: {trade}.")
|
||||
open_trades[pair].append(trade)
|
||||
LocalTrade.add_bt_trade(trade)
|
||||
self.wallets.update()
|
||||
|
||||
for trade in list(open_trades[pair]):
|
||||
for trade in list(LocalTrade.bt_trades_open_pp[pair]):
|
||||
# 3. Process entry orders.
|
||||
order = trade.select_order(trade.entry_side, is_open=True)
|
||||
if order and self._get_order_filled(order.price, row):
|
||||
@@ -1189,22 +1136,67 @@ class Backtesting:
                         trade.close(order.price, show_msg=False)

                 # logger.debug(f"{pair} - Backtesting exit {trade}")
-                open_trade_count -= 1
-                open_trades[pair].remove(trade)
                 LocalTrade.close_bt_trade(trade)
-                trades.append(trade)
                 self.wallets.update()
-                self.run_protections(
-                    enable_protections, pair, current_time, trade.trade_direction)
+                self.run_protections(pair, current_time, trade.trade_direction)
+        return open_trade_count_start
+
+    def backtest(self, processed: Dict,
+                 start_date: datetime, end_date: datetime,
+                 max_open_trades: int = 0) -> Dict[str, Any]:
+        """
+        Implement backtesting functionality
+
+        NOTE: This method is used by Hyperopt at each iteration. Please keep it optimized.
+        Of course try to not have ugly code, as some accessors are sometimes slower than functions.
+        Avoid extensive logging in this method and functions it calls.
+
+        :param processed: a processed dictionary with format {pair, data}, which gets cleared to
+            optimize memory usage!
+        :param start_date: backtesting timerange start datetime
+        :param end_date: backtesting timerange end datetime
+        :param max_open_trades: maximum number of concurrent trades, <= 0 means unlimited
+        :return: DataFrame with trades (results of backtesting)
+        """
+        self.prepare_backtest(self.enable_protections)
+        # Ensure wallets are uptodate (important for --strategy-list)
+        self.wallets.update()
+        # Use dict of lists with data for performance
+        # (looping lists is a lot faster than pandas DataFrames)
+        data: Dict = self._get_ohlcv_as_lists(processed)
+
+        # Indexes per pair, so some pairs are allowed to have a missing start.
+        indexes: Dict = defaultdict(int)
+        current_time = start_date + timedelta(minutes=self.timeframe_min)
+
+        self.progress.init_step(BacktestState.BACKTEST, int(
+            (end_date - start_date) / timedelta(minutes=self.timeframe_min)))
+
+        # Loop timerange and get candle for each pair at that point in time
+        while current_time <= end_date:
+            open_trade_count_start = LocalTrade.bt_open_open_trade_count
+            self.check_abort()
+            for i, pair in enumerate(data):
+                row_index = indexes[pair]
+                row = self.validate_row(data, pair, row_index, current_time)
+                if not row:
+                    continue
+
+                row_index += 1
+                indexes[pair] = row_index
+                self.dataprovider._set_dataframe_max_index(row_index)
+
+                open_trade_count_start = self.backtest_loop(
+                    row, pair, current_time, end_date, max_open_trades, open_trade_count_start)

             # Move time one configured time_interval ahead.
             self.progress.increment()
             current_time += timedelta(minutes=self.timeframe_min)

-        trades += self.handle_left_open(open_trades, data=data)
+        self.handle_left_open(LocalTrade.bt_trades_open_pp, data=data)
         self.wallets.update()

-        results = trade_list_to_dataframe(trades)
+        results = trade_list_to_dataframe(LocalTrade.trades)
         return {
             'results': results,
             'config': self.strategy.config,
@@ -1257,8 +1249,6 @@ class Backtesting:
             start_date=min_date,
             end_date=max_date,
             max_open_trades=max_open_trades,
-            position_stacking=self.config.get('position_stacking', False),
-            enable_protections=self.config.get('enable_protections', False),
         )
         backtest_end_time = datetime.now(timezone.utc)
         results.update({
@@ -122,7 +122,6 @@ class Hyperopt:
         else:
             logger.debug('Ignoring max_open_trades (--disable-max-market-positions was used) ...')
             self.max_open_trades = 0
-        self.position_stacking = self.config.get('position_stacking', False)

         if HyperoptTools.has_space(self.config, 'sell'):
             # Make sure use_exit_signal is enabled
@@ -258,6 +257,7 @@ class Hyperopt:
             logger.debug("Hyperopt has 'protection' space")
             # Enable Protections if protection space is selected.
             self.config['enable_protections'] = True
+            self.backtesting.enable_protections = True
             self.protection_space = self.custom_hyperopt.protection_space()

         if HyperoptTools.has_space(self.config, 'buy'):
@@ -339,8 +339,6 @@ class Hyperopt:
             start_date=self.min_date,
             end_date=self.max_date,
             max_open_trades=self.max_open_trades,
-            position_stacking=self.position_stacking,
-            enable_protections=self.config.get('enable_protections', False),
         )
         backtest_end_time = datetime.now(timezone.utc)
         bt_results.update({
@@ -12,7 +12,7 @@ import tabulate
 from colorama import Fore, Style
 from pandas import isna, json_normalize

-from freqtrade.constants import FTHYPT_FILEVERSION, USERPATH_STRATEGIES, Config
+from freqtrade.constants import FTHYPT_FILEVERSION, Config
 from freqtrade.enums import HyperoptState
 from freqtrade.exceptions import OperationalException
 from freqtrade.misc import deep_merge_dicts, round_coin_value, round_dict, safe_value_fallback2
@@ -50,9 +50,8 @@ class HyperoptTools():
         Get Strategy-location (filename) from strategy_name
         """
         from freqtrade.resolvers.strategy_resolver import StrategyResolver
-        directory = Path(config.get('strategy_path', config['user_data_dir'] / USERPATH_STRATEGIES))
         strategy_objs = StrategyResolver.search_all_objects(
-            directory, False, config.get('recursive_strategy_search', False))
+            config, False, config.get('recursive_strategy_search', False))
         strategies = [s for s in strategy_objs if s['name'] == strategy_name]
         if strategies:
             strategy = strategies[0]
@@ -408,10 +408,10 @@ def generate_strategy_stats(pairlist: List[str],

     exit_reason_stats = generate_exit_reason_stats(max_open_trades=max_open_trades,
                                                    results=results)
-    left_open_results = generate_pair_metrics(pairlist, stake_currency=stake_currency,
-                                              starting_balance=start_balance,
-                                              results=results.loc[results['is_open']],
-                                              skip_nan=True)
+    left_open_results = generate_pair_metrics(
+        pairlist, stake_currency=stake_currency, starting_balance=start_balance,
+        results=results.loc[results['exit_reason'] == 'force_exit'], skip_nan=True)

     daily_stats = generate_daily_stats(results)
     trade_stats = generate_trading_stats(results)
     best_pair = max([pair for pair in pair_results if pair['key'] != 'TOTAL'],
@@ -2,6 +2,7 @@
 This module contains the class to persist trades into SQLite
 """
 import logging
+from collections import defaultdict
 from datetime import datetime, timedelta, timezone
 from math import isclose
 from typing import Any, Dict, List, Optional
@@ -255,6 +256,9 @@ class LocalTrade():
     # Trades container for backtesting
     trades: List['LocalTrade'] = []
     trades_open: List['LocalTrade'] = []
+    # Copy of trades_open - but indexed by pair
+    bt_trades_open_pp: Dict[str, List['LocalTrade']] = defaultdict(list)
+    bt_open_open_trade_count: int = 0
     total_profit: float = 0
     realized_profit: float = 0
||||
@ -538,6 +542,8 @@ class LocalTrade():
|
||||
"""
|
||||
LocalTrade.trades = []
|
||||
LocalTrade.trades_open = []
|
||||
LocalTrade.bt_trades_open_pp = defaultdict(list)
|
||||
LocalTrade.bt_open_open_trade_count = 0
|
||||
LocalTrade.total_profit = 0
|
||||
|
||||
def adjust_min_max_rates(self, current_price: float, current_price_low: float) -> None:
|
||||
@@ -1067,6 +1073,8 @@ class LocalTrade():
     @staticmethod
     def close_bt_trade(trade):
         LocalTrade.trades_open.remove(trade)
+        LocalTrade.bt_trades_open_pp[trade.pair].remove(trade)
+        LocalTrade.bt_open_open_trade_count -= 1
         LocalTrade.trades.append(trade)
         LocalTrade.total_profit += trade.close_profit_abs
||||
@ -1074,9 +1082,17 @@ class LocalTrade():
|
||||
def add_bt_trade(trade):
|
||||
if trade.is_open:
|
||||
LocalTrade.trades_open.append(trade)
|
||||
LocalTrade.bt_trades_open_pp[trade.pair].append(trade)
|
||||
LocalTrade.bt_open_open_trade_count += 1
|
||||
else:
|
||||
LocalTrade.trades.append(trade)
|
||||
|
||||
@staticmethod
|
||||
def remove_bt_trade(trade):
|
||||
LocalTrade.trades_open.remove(trade)
|
||||
LocalTrade.bt_trades_open_pp[trade.pair].remove(trade)
|
||||
LocalTrade.bt_open_open_trade_count -= 1
|
||||
|
||||
@staticmethod
|
||||
def get_open_trades() -> List[Any]:
|
||||
"""
|
||||
@@ -1092,7 +1108,7 @@ class LocalTrade():
         if Trade.use_db:
             return Trade.query.filter(Trade.is_open.is_(True)).count()
         else:
-            return len(LocalTrade.trades_open)
+            return LocalTrade.bt_open_open_trade_count

     @staticmethod
     def stoploss_reinitialization(desired_stoploss):
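Editor's note: the class-level counter and the per-pair index must stay in sync with `trades_open`; the helpers above are the only code that mutates them. A hedged usage sketch — the `make_trade` factory is hypothetical, and a real trade also needs `close_profit_abs` set before closing:

```python
trade = make_trade(pair="BTC/USDT")  # hypothetical factory returning a LocalTrade
LocalTrade.add_bt_trade(trade)       # appends to trades_open and bt_trades_open_pp["BTC/USDT"]
assert LocalTrade.bt_open_open_trade_count == 1

LocalTrade.close_bt_trade(trade)     # moves the trade into LocalTrade.trades
assert LocalTrade.bt_open_open_trade_count == 0
assert trade in LocalTrade.trades
```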
@@ -26,6 +26,7 @@ class FreqaiModelResolver(IResolver):
    initial_search_path = (
        Path(__file__).parent.parent.joinpath("freqai/prediction_models").resolve()
    )
+   extra_path = "freqaimodel_path"

    @staticmethod
    def load_freqaimodel(config: Config) -> IFreqaiModel:
@@ -50,7 +51,6 @@ class FreqaiModelResolver(IResolver):
            freqaimodel_name,
            config,
            kwargs={"config": config},
-           extra_dir=config.get("freqaimodel_path"),
        )

        return freqaimodel
@@ -42,6 +42,8 @@ class IResolver:
     object_type_str: str
     user_subdir: Optional[str] = None
     initial_search_path: Optional[Path]
+    # Optional config setting containing a path (strategy_path, freqaimodel_path)
+    extra_path: Optional[str] = None

     @classmethod
     def build_search_paths(cls, config: Config, user_subdir: Optional[str] = None,
@@ -58,6 +60,9 @@ class IResolver:
             for dir in extra_dirs:
                 abs_paths.insert(0, Path(dir).resolve())

+        if cls.extra_path and (extra := config.get(cls.extra_path)):
+            abs_paths.insert(0, Path(extra).resolve())
+
         return abs_paths

     @classmethod
@@ -183,9 +188,35 @@ class IResolver:
         )

     @classmethod
-    def search_all_objects(cls, directory: Path, enum_failed: bool,
+    def search_all_objects(cls, config: Config, enum_failed: bool,
                            recursive: bool = False) -> List[Dict[str, Any]]:
         """
-        Searches a directory for valid objects
-        :param directory: Path to search
+        Searches for valid objects
+        :param config: Config object
+        :param enum_failed: If True, will return None for modules which fail.
+            Otherwise, failing modules are skipped.
+        :param recursive: Recursively walk directory tree searching for strategies
+        :return: List of dicts containing 'name', 'class' and 'location' entries
+        """
+        result = []
+
+        abs_paths = cls.build_search_paths(config, user_subdir=cls.user_subdir)
+        for path in abs_paths:
+            result.extend(cls._search_all_objects(path, enum_failed, recursive))
+        return result
+
+    @classmethod
+    def _build_rel_location(cls, directory: Path, entry: Path) -> str:
+
+        builtin = cls.initial_search_path == directory
+        return f"<builtin>/{entry.relative_to(directory)}" if builtin else str(
+            entry.relative_to(directory))
+
+    @classmethod
+    def _search_all_objects(
+            cls, directory: Path, enum_failed: bool, recursive: bool = False,
+            basedir: Optional[Path] = None) -> List[Dict[str, Any]]:
+        """
+        Searches a directory for valid objects
+        :param directory: Path to search
         :param enum_failed: If True, will return None for modules which fail.
@@ -204,7 +235,8 @@ class IResolver:
                     and not entry.name.startswith('__')
                     and not entry.name.startswith('.')
                 ):
-                    objects.extend(cls.search_all_objects(entry, enum_failed, recursive=recursive))
+                    objects.extend(cls._search_all_objects(
+                        entry, enum_failed, recursive, basedir or directory))
                 # Only consider python files
                 if entry.suffix != '.py':
                     logger.debug('Ignoring %s', entry)
@@ -217,5 +249,6 @@ class IResolver:
                 {'name': obj[0].__name__ if obj is not None else '',
                  'class': obj[0] if obj is not None else None,
                  'location': entry,
+                 'location_rel': cls._build_rel_location(basedir or directory, entry),
                  })
         return objects
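Editor's note: a toy, runnable rendering of what `_build_rel_location()` produces — built-in objects get a `<builtin>/` prefix, user objects a plain relative path (the paths below are illustrative):

```python
from pathlib import Path

def build_rel_location(initial_search_path: Path, directory: Path, entry: Path) -> str:
    # Mirrors IResolver._build_rel_location above.
    builtin = initial_search_path == directory
    return (f"<builtin>/{entry.relative_to(directory)}" if builtin
            else str(entry.relative_to(directory)))

builtin_dir = Path("freqtrade/freqai/prediction_models")
print(build_rel_location(builtin_dir, builtin_dir, builtin_dir / "XGBoostRegressor.py"))
# -> <builtin>/XGBoostRegressor.py
print(build_rel_location(builtin_dir, Path("user_data/freqaimodels"),
                         Path("user_data/freqaimodels/MyModel.py")))
# -> MyModel.py
```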
@@ -30,6 +30,7 @@ class StrategyResolver(IResolver):
     object_type_str = "Strategy"
     user_subdir = USERPATH_STRATEGIES
     initial_search_path = None
+    extra_path = "strategy_path"

     @staticmethod
     def load_strategy(config: Config = None) -> IStrategy:
@@ -89,6 +89,7 @@ async def api_start_backtest(bt_settings: BacktestRequest, background_tasks: Bac
     lastconfig['enable_protections'] = btconfig.get('enable_protections')
     lastconfig['dry_run_wallet'] = btconfig.get('dry_run_wallet')

+    ApiServer._bt.enable_protections = btconfig.get('enable_protections', False)
     ApiServer._bt.strategylist = [strat]
     ApiServer._bt.results = {}
     ApiServer._bt.load_prior_backtest()
@@ -1,13 +1,11 @@
 import logging
 from copy import deepcopy
-from pathlib import Path
 from typing import List, Optional

 from fastapi import APIRouter, Depends, Query
 from fastapi.exceptions import HTTPException

 from freqtrade import __version__
-from freqtrade.constants import USERPATH_STRATEGIES
 from freqtrade.data.history import get_datahandler
 from freqtrade.enums import CandleType, TradingMode
 from freqtrade.exceptions import OperationalException
@@ -253,11 +251,9 @@ def plot_config(rpc: RPC = Depends(get_rpc)):

 @router.get('/strategies', response_model=StrategyListResponse, tags=['strategy'])
 def list_strategies(config=Depends(get_config)):
-    directory = Path(config.get(
-        'strategy_path', config['user_data_dir'] / USERPATH_STRATEGIES))
     from freqtrade.resolvers.strategy_resolver import StrategyResolver
     strategies = StrategyResolver.search_all_objects(
-        directory, False, config.get('recursive_strategy_search', False))
+        config, False, config.get('recursive_strategy_search', False))
     strategies = sorted(strategies, key=lambda x: x['name'])

     return {'strategies': [x['name'] for x in strategies]}
@@ -1,3 +1,4 @@
+import asyncio
 import logging
 from typing import Any, Dict
||||
@ -89,6 +90,8 @@ async def _process_consumer_request(
|
||||
for _, message in analyzed_df.items():
|
||||
response = WSAnalyzedDFMessage(data=message)
|
||||
await channel.send(response.dict(exclude_none=True))
|
||||
# Throttle the messages to 50/s
|
||||
await asyncio.sleep(0.02)
|
||||
|
||||
|
||||
@router.websocket("/message/ws")
|
||||
|
@ -198,6 +198,10 @@ class ApiServer(RPCHandler):
                logger.debug(f"Found message of type: {message.get('type')}")
                # Broadcast it
                await self._ws_channel_manager.broadcast(message)
                # Limit messages per sec.
                # Could cause problems with queue size if too low, and
                # problems with network traffic if too high.
                await asyncio.sleep(0.001)
        except asyncio.CancelledError:
            pass

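Both sleeps above implement the same back-pressure trade-off: a fixed per-iteration delay caps throughput so the send loop neither starves (queue grows) nor floods the socket. A self-contained sketch of the pattern, assuming an `asyncio.Queue` source and channel objects with an async `send()` (freqtrade's channel manager is more involved):

```python
import asyncio


async def broadcast_loop(queue: asyncio.Queue, channels: list) -> None:
    while True:
        message = await queue.get()
        for channel in channels:
            await channel.send(message)
        # Cap at roughly 1000 messages/s: low enough not to flood the
        # network, high enough to keep the queue from backing up.
        await asyncio.sleep(0.001)
```
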
@ -27,4 +27,4 @@ types-cachetools==5.2.1
types-filelock==3.2.7
types-requests==2.28.11.2
types-tabulate==0.9.0.0
types-python-dateutil==2.8.19
types-python-dateutil==2.8.19.1

@ -5,6 +5,6 @@
scikit-learn==1.1.2
joblib==1.2.0
catboost==1.1; platform_machine != 'aarch64'
lightgbm==3.3.2
lightgbm==3.3.3
xgboost==1.6.2
tensorboard==2.10.1

@ -1,14 +1,14 @@
numpy==1.23.3
numpy==1.23.4
pandas==1.5.0; platform_machine != 'armv7l'
# Piwheels doesn't have 1.5.0 yet.
pandas==1.4.3; platform_machine == 'armv7l'
pandas-ta==0.3.14b

ccxt==1.95.30
ccxt==2.0.25
# Pin cryptography for now due to rust build errors with piwheels
cryptography==38.0.1
aiohttp==3.8.3
SQLAlchemy==1.4.41
SQLAlchemy==1.4.42
python-telegram-bot==13.14
arrow==1.2.3
cachetools==4.2.2
@ -37,7 +37,7 @@ orjson==3.8.0
sdnotify==0.3.2

# API Server
fastapi==0.85.0
fastapi==0.85.1
pydantic>=1.8.0
uvicorn==0.18.3
pyjwt==2.5.0

@ -18,6 +18,7 @@ from freqtrade.commands import (start_backtesting_show, start_convert_data, star
from freqtrade.commands.db_commands import start_convert_db
from freqtrade.commands.deploy_commands import (clean_ui_subdir, download_and_install_ui,
                                                get_ui_download_url, read_ui_version)
from freqtrade.commands.list_commands import start_list_freqAI_models
from freqtrade.configuration import setup_utils_configuration
from freqtrade.enums import RunMode
from freqtrade.exceptions import OperationalException
@ -944,6 +945,34 @@ def test_start_list_strategies(capsys):
    assert str(Path("broken_strats/broken_futures_strategies.py")) in captured.out


def test_start_list_freqAI_models(capsys):

    args = [
        "list-freqaimodels",
        "-1"
    ]
    pargs = get_args(args)
    pargs['config'] = None
    start_list_freqAI_models(pargs)
    captured = capsys.readouterr()
    assert "LightGBMClassifier" in captured.out
    assert "LightGBMRegressor" in captured.out
    assert "XGBoostRegressor" in captured.out
    assert "<builtin>/LightGBMRegressor.py" not in captured.out

    args = [
        "list-freqaimodels",
    ]
    pargs = get_args(args)
    pargs['config'] = None
    start_list_freqAI_models(pargs)
    captured = capsys.readouterr()
    assert "LightGBMClassifier" in captured.out
    assert "LightGBMRegressor" in captured.out
    assert "XGBoostRegressor" in captured.out
    assert "<builtin>/LightGBMRegressor.py" in captured.out


def test_start_test_pairlist(mocker, caplog, tickers, default_conf, capsys):
    patch_exchange(mocker, mock_markets=True)
    mocker.patch.multiple('freqtrade.exchange.Exchange',

@ -2196,6 +2196,9 @@ def test_refresh_latest_ohlcv_cache(mocker, default_conf, candle_type, time_mach
    time_machine.move_to(start + timedelta(hours=99, minutes=30))

    exchange = get_patched_exchange(mocker, default_conf)
    mocker.patch("freqtrade.exchange.Exchange.ohlcv_candle_limit", return_value=100)
    assert exchange._startup_candle_count == 0

    exchange._api_async.fetch_ohlcv = get_mock_coro(ohlcv)
    pair1 = ('IOTA/ETH', '1h', candle_type)
    pair2 = ('XRP/ETH', '1h', candle_type)
@ -2236,30 +2239,36 @@ def test_refresh_latest_ohlcv_cache(mocker, default_conf, candle_type, time_mach
    assert len(res) == 2
    assert len(res[pair1]) == 99
    assert len(res[pair2]) == 99
    assert res[pair2].at[0, 'open']
    assert exchange._pairs_last_refresh_time[pair1] == ohlcv[-1][0] // 1000
    refresh_prior = exchange._pairs_last_refresh_time[pair1]

    # New candle on exchange - only return 50 candles (but one candle further)
    new_startdate = (start + timedelta(hours=51)).strftime('%Y-%m-%d %H:%M')
    ohlcv = generate_test_data_raw('1h', 50, new_startdate)
    # New candle on exchange - return 100 candles - but skip one candle so we actually get 2 candles
    # in one go
    new_startdate = (start + timedelta(hours=2)).strftime('%Y-%m-%d %H:%M')
    # mocker.patch("freqtrade.exchange.Exchange.ohlcv_candle_limit", return_value=100)
    ohlcv = generate_test_data_raw('1h', 100, new_startdate)
    exchange._api_async.fetch_ohlcv = get_mock_coro(ohlcv)
    res = exchange.refresh_latest_ohlcv(pairs)
    assert exchange._api_async.fetch_ohlcv.call_count == 2
    assert len(res) == 2
    assert len(res[pair1]) == 100
    assert len(res[pair2]) == 100
    # Verify index starts at 0
    assert res[pair2].at[0, 'open']
    assert refresh_prior != exchange._pairs_last_refresh_time[pair1]

    assert exchange._pairs_last_refresh_time[pair1] == ohlcv[-1][0] // 1000
    assert exchange._pairs_last_refresh_time[pair2] == ohlcv[-1][0] // 1000
    exchange._api_async.fetch_ohlcv.reset_mock()

    # Retry same call - no action.
    # Retry same call - from cache
    res = exchange.refresh_latest_ohlcv(pairs)
    assert exchange._api_async.fetch_ohlcv.call_count == 0
    assert len(res) == 2
    assert len(res[pair1]) == 100
    assert len(res[pair2]) == 100
    assert res[pair2].at[0, 'open']

    # Move to the distant future (so a single call would leave a hole in the data)
    time_machine.move_to(start + timedelta(hours=2000))
@ -2272,6 +2281,7 @@ def test_refresh_latest_ohlcv_cache(mocker, default_conf, candle_type, time_mach
    # Cache eviction - new data.
    assert len(res[pair1]) == 99
    assert len(res[pair2]) == 99
    assert res[pair2].at[0, 'open']


@pytest.mark.asyncio
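The caching contract this test pins down: one exchange call per (pair, timeframe, candle_type) key, a re-fetch only once a new candle is due, and cache hits otherwise. A toy version of that bookkeeping (illustrative only, not freqtrade's implementation):

```python
import time


class CandleCache:
    """Toy per-key OHLCV cache keyed by (pair, timeframe, candle_type)."""

    def __init__(self, timeframe_sec: int = 3600):
        self.timeframe_sec = timeframe_sec
        self._candles: dict = {}
        self._last_refresh: dict = {}  # key -> open time of last candle (s)

    def refresh(self, key, fetch):
        # Only call the exchange once the next candle should exist.
        if time.time() - self._last_refresh.get(key, 0) >= self.timeframe_sec:
            candles = fetch(key)  # raw ohlcv rows with ms timestamps
            self._candles[key] = candles
            self._last_refresh[key] = candles[-1][0] // 1000
        return self._candles[key]
```
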
@ -4341,9 +4351,10 @@ def test__fetch_and_calculate_funding_fees_datetime_called(
    ('XLTCUSDT', 1, 'spot'),
    ('LTC/USD', 1, 'futures'),
    ('XLTCUSDT', 0.01, 'futures'),
    ('ETH/USDT:USDT', 10, 'futures')
    ('ETH/USDT:USDT', 10, 'futures'),
    ('TORN/USDT:USDT', None, 'futures'),  # Don't fail for unavailable pairs.
])
def est__get_contract_size(mocker, default_conf, pair, expected_size, trading_mode):
def test__get_contract_size(mocker, default_conf, pair, expected_size, trading_mode):
    api_mock = MagicMock()
    default_conf['trading_mode'] = trading_mode
    default_conf['margin_mode'] = 'isolated'

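The new `TORN/USDT:USDT` case documents the tolerant behavior: an unavailable pair yields `None` instead of raising. Sketched under assumptions (ccxt-style market dicts; not freqtrade's exact code):

```python
from typing import Optional


def get_contract_size(markets: dict, pair: str) -> Optional[float]:
    market = markets.get(pair)
    if market is None:
        return None  # Don't fail for unavailable pairs.
    # Spot markets carry no contract size; default to 1.0.
    return market.get('contractSize') or 1.0
```
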
@ -30,6 +30,7 @@ def is_mac() -> bool:
@pytest.mark.parametrize('model', [
    'LightGBMRegressor',
    'XGBoostRegressor',
    'XGBoostRFRegressor',
    'CatboostRegressor',
])
def test_extract_data_and_train_model_Standard(mocker, freqai_conf, model):
@ -55,10 +56,17 @@ def test_extract_data_and_train_model_Standard(mocker, freqai_conf, model):

    data_load_timerange = TimeRange.parse_timerange("20180125-20180130")
    new_timerange = TimeRange.parse_timerange("20180127-20180130")
    freqai.dk.set_paths('ADA/BTC', None)

    freqai.train_timer("start", "ADA/BTC")
    freqai.extract_data_and_train_model(
        new_timerange, "ADA/BTC", strategy, freqai.dk, data_load_timerange)
    freqai.train_timer("stop", "ADA/BTC")
    freqai.dd.save_metric_tracker_to_disk()
    freqai.dd.save_drawer_to_disk()

    assert Path(freqai.dk.full_path / "metric_tracker.json").is_file()
    assert Path(freqai.dk.full_path / "pair_dictionary.json").is_file()
    assert Path(freqai.dk.data_path /
                f"{freqai.dk.model_filename}_model.{model_save_ext}").is_file()
    assert Path(freqai.dk.data_path / f"{freqai.dk.model_filename}_metadata.json").is_file()
@ -93,6 +101,7 @@ def test_extract_data_and_train_model_MultiTargets(mocker, freqai_conf, model):

    data_load_timerange = TimeRange.parse_timerange("20180110-20180130")
    new_timerange = TimeRange.parse_timerange("20180120-20180130")
    freqai.dk.set_paths('ADA/BTC', None)

    freqai.extract_data_and_train_model(
        new_timerange, "ADA/BTC", strategy, freqai.dk, data_load_timerange)
@ -111,6 +120,7 @@ def test_extract_data_and_train_model_MultiTargets(mocker, freqai_conf, model):
    'LightGBMClassifier',
    'CatboostClassifier',
    'XGBoostClassifier',
    'XGBoostRFClassifier',
])
def test_extract_data_and_train_model_Classifiers(mocker, freqai_conf, model):
    if is_arm() and model == 'CatboostClassifier':
@ -134,6 +144,7 @@ def test_extract_data_and_train_model_Classifiers(mocker, freqai_conf, model):

    data_load_timerange = TimeRange.parse_timerange("20180110-20180130")
    new_timerange = TimeRange.parse_timerange("20180120-20180130")
    freqai.dk.set_paths('ADA/BTC', None)

    freqai.extract_data_and_train_model(new_timerange, "ADA/BTC",
                                        strategy, freqai.dk, data_load_timerange)

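The `train_timer("start", pair)` / `train_timer("stop", pair)` bracket feeds the metric tracker that `save_metric_tracker_to_disk()` later persists. A rough sketch of that start/stop idiom (hypothetical helper; the real freqai version also records CPU usage per pair):

```python
import time
from collections import defaultdict


class TrainTimer:
    def __init__(self):
        self._started: dict = {}
        self.metrics = defaultdict(list)  # pair -> list of train durations

    def timer(self, action: str, pair: str) -> None:
        if action == "start":
            self._started[pair] = time.time()
        elif action == "stop":
            self.metrics[pair].append(time.time() - self._started.pop(pair))
```
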
@ -97,7 +97,6 @@ def _make_backtest_conf(mocker, datadir, conf=None, pair='UNITTEST/BTC'):
        'start_date': min_date,
        'end_date': max_date,
        'max_open_trades': 10,
        'position_stacking': False,
    }


@ -735,7 +734,6 @@ def test_backtest_one(default_conf, fee, mocker, testdatadir) -> None:
        start_date=min_date,
        end_date=max_date,
        max_open_trades=10,
        position_stacking=False,
    )
    results = result['results']
    assert not results.empty
@ -799,6 +797,34 @@ def test_backtest_one(default_conf, fee, mocker, testdatadir) -> None:
                t["close_rate"], 6) < round(ln.iloc[0]["high"], 6))


def test_backtest_timedout_entry_orders(default_conf, fee, mocker, testdatadir) -> None:
    # This strategy intentionally places unfillable orders.
    default_conf['strategy'] = 'StrategyTestV3CustomEntryPrice'
    default_conf['startup_candle_count'] = 0
    # Cancel unfilled order after 4 minutes on 5m timeframe.
    default_conf["unfilledtimeout"] = {"entry": 4}
    mocker.patch('freqtrade.exchange.Exchange.get_fee', fee)
    mocker.patch("freqtrade.exchange.Exchange.get_min_pair_stake_amount", return_value=0.00001)
    mocker.patch("freqtrade.exchange.Exchange.get_max_pair_stake_amount", return_value=float('inf'))
    patch_exchange(mocker)
    backtesting = Backtesting(default_conf)
    backtesting._set_strategy(backtesting.strategylist[0])
    # Testing dataframe contains 11 candles. Expecting 10 timed out orders.
    timerange = TimeRange('date', 'date', 1517227800, 1517231100)
    data = history.load_data(datadir=testdatadir, timeframe='5m', pairs=['UNITTEST/BTC'],
                             timerange=timerange)
    min_date, max_date = get_timerange(data)

    result = backtesting.backtest(
        processed=deepcopy(data),
        start_date=min_date,
        end_date=max_date,
        max_open_trades=1,
    )

    assert result['timedout_entry_orders'] == 10


def test_backtest_1min_timeframe(default_conf, fee, mocker, testdatadir) -> None:
    default_conf['use_exit_signal'] = False
    mocker.patch('freqtrade.exchange.Exchange.get_fee', fee)
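For reference, the moving parts of `test_backtest_timedout_entry_orders` in config form: a strategy whose limit orders never fill, plus an entry timeout shorter than one 5m candle, so every order gets cancelled. (Values mirror the test above; the `unit` key is shown for completeness and assumed to default to minutes.)

```python
# Assumed config shape mirroring the test above.
config_overrides = {
    "strategy": "StrategyTestV3CustomEntryPrice",  # places unfillable orders
    "startup_candle_count": 0,
    "unfilledtimeout": {
        "entry": 4,         # cancel unfilled entries after 4 units
        "unit": "minutes",  # assumption: minutes is the default unit
    },
}
```
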
@ -819,7 +845,6 @@ def test_backtest_1min_timeframe(default_conf, fee, mocker, testdatadir) -> None
        start_date=min_date,
        end_date=max_date,
        max_open_trades=1,
        position_stacking=False,
    )
    assert not results['results'].empty
    assert len(results['results']) == 1
@ -851,7 +876,6 @@ def test_backtest_trim_no_data_left(default_conf, fee, mocker, testdatadir) -> N
        start_date=min_date,
        end_date=max_date,
        max_open_trades=10,
        position_stacking=False,
    )


@ -906,7 +930,6 @@ def test_backtest_dataprovider_analyzed_df(default_conf, fee, mocker, testdatadi
        start_date=min_date,
        end_date=max_date,
        max_open_trades=10,
        position_stacking=False,
    )
    assert count == 5

@ -950,8 +973,6 @@ def test_backtest_pricecontours_protections(default_conf, fee, mocker, testdatad
        start_date=min_date,
        end_date=max_date,
        max_open_trades=1,
        position_stacking=False,
        enable_protections=default_conf.get('enable_protections', False),
    )
    assert len(results['results']) == numres

@ -994,8 +1015,6 @@ def test_backtest_pricecontours(default_conf, fee, mocker, testdatadir,
        start_date=min_date,
        end_date=max_date,
        max_open_trades=1,
        position_stacking=False,
        enable_protections=default_conf.get('enable_protections', False),
    )
    assert len(results['results']) == expected

@ -1107,7 +1126,6 @@ def test_backtest_multi_pair(default_conf, fee, mocker, tres, pair, testdatadir)
        'start_date': min_date,
        'end_date': max_date,
        'max_open_trades': 3,
        'position_stacking': False,
    }

    results = backtesting.backtest(**backtest_conf)
@ -1130,7 +1148,6 @@ def test_backtest_multi_pair(default_conf, fee, mocker, tres, pair, testdatadir)
        'start_date': min_date,
        'end_date': max_date,
        'max_open_trades': 1,
        'position_stacking': False,
    }
    results = backtesting.backtest(**backtest_conf)
    assert len(evaluate_result_multi(results['results'], '5m', 1)) == 0

@ -42,7 +42,6 @@ def test_backtest_position_adjustment(default_conf, fee, mocker, testdatadir) ->
        start_date=min_date,
        end_date=max_date,
        max_open_trades=10,
        position_stacking=False,
    )
    results = result['results']
    assert not results.empty

@ -336,7 +336,7 @@ def test_start_calls_optimizer(mocker, hyperopt_conf, capsys) -> None:
    assert hasattr(hyperopt.backtesting.strategy, "advise_entry")
    assert hasattr(hyperopt, "max_open_trades")
    assert hyperopt.max_open_trades == hyperopt_conf['max_open_trades']
    assert hasattr(hyperopt, "position_stacking")
    assert hasattr(hyperopt.backtesting, "_position_stacking")


def test_hyperopt_format_results(hyperopt):
@ -704,7 +704,7 @@ def test_simplified_interface_roi_stoploss(mocker, hyperopt_conf, capsys) -> Non
    assert hasattr(hyperopt.backtesting.strategy, "advise_entry")
    assert hasattr(hyperopt, "max_open_trades")
    assert hyperopt.max_open_trades == hyperopt_conf['max_open_trades']
    assert hasattr(hyperopt, "position_stacking")
    assert hasattr(hyperopt.backtesting, "_position_stacking")


def test_simplified_interface_all_failed(mocker, hyperopt_conf, caplog) -> None:
@ -778,7 +778,7 @@ def test_simplified_interface_buy(mocker, hyperopt_conf, capsys) -> None:
    assert hasattr(hyperopt.backtesting.strategy, "advise_entry")
    assert hasattr(hyperopt, "max_open_trades")
    assert hyperopt.max_open_trades == hyperopt_conf['max_open_trades']
    assert hasattr(hyperopt, "position_stacking")
    assert hasattr(hyperopt.backtesting, "_position_stacking")


def test_simplified_interface_sell(mocker, hyperopt_conf, capsys) -> None:
@ -821,7 +821,7 @@ def test_simplified_interface_sell(mocker, hyperopt_conf, capsys) -> None:
    assert hasattr(hyperopt.backtesting.strategy, "advise_entry")
    assert hasattr(hyperopt, "max_open_trades")
    assert hyperopt.max_open_trades == hyperopt_conf['max_open_trades']
    assert hasattr(hyperopt, "position_stacking")
    assert hasattr(hyperopt.backtesting, "_position_stacking")


@pytest.mark.parametrize("space", [

@ -1443,8 +1443,9 @@ def test_api_plot_config(botclient):
    assert isinstance(rc.json()['subplots'], dict)


def test_api_strategies(botclient):
def test_api_strategies(botclient, tmpdir):
    ftbot, client = botclient
    ftbot.config['user_data_dir'] = Path(tmpdir)

    rc = client_get(client, f"{BASE_URI}/strategies")

@ -1456,6 +1457,7 @@ def test_api_strategies(botclient):
    'InformativeDecoratorTest',
    'StrategyTestV2',
    'StrategyTestV3',
    'StrategyTestV3CustomEntryPrice',
    'StrategyTestV3Futures',
    'freqai_test_classifier',
    'freqai_test_multimodel_strat',
tests/strategy/strats/strategy_test_v3_custom_entry_price.py (new file, 37 lines)
@ -0,0 +1,37 @@
# pragma pylint: disable=missing-docstring, invalid-name, pointless-string-statement

from datetime import datetime
from typing import Optional

from pandas import DataFrame
from strategy_test_v3 import StrategyTestV3


class StrategyTestV3CustomEntryPrice(StrategyTestV3):
    """
    Strategy used by the freqtrade bot tests.
    Please do not modify this strategy, it's intended for internal use only.
    Please look at the SampleStrategy in the user_data/strategy directory
    or strategy repository https://github.com/freqtrade/freqtrade-strategies
    for samples and inspiration.
    """
    new_entry_price: float = 0.001

    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        return dataframe

    def populate_entry_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:

        dataframe.loc[
            dataframe['volume'] > 0,
            'enter_long'] = 1

        return dataframe

    def populate_exit_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        return dataframe

    def custom_entry_price(self, pair: str, current_time: datetime, proposed_rate: float,
                           entry_tag: Optional[str], side: str, **kwargs) -> float:

        return self.new_entry_price
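The fixed `0.001` return is deliberately unfillable, which is what drives the timed-out-orders test above. A user-strategy flavour of the same callback might instead derive the price from the proposed rate, e.g. (illustrative only):

```python
# Hypothetical variant of the callback above - undercut the proposed
# entry by 0.5% instead of returning a fixed, unfillable price.
from datetime import datetime
from typing import Optional


class BidBelowMarket(StrategyTestV3):

    def custom_entry_price(self, pair: str, current_time: datetime,
                           proposed_rate: float, entry_tag: Optional[str],
                           side: str, **kwargs) -> float:
        return proposed_rate * 0.995
```
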
@ -32,24 +32,25 @@ def test_search_strategy():


def test_search_all_strategies_no_failed():
    directory = Path(__file__).parent / "strats"
    strategies = StrategyResolver.search_all_objects(directory, enum_failed=False)
    strategies = StrategyResolver._search_all_objects(directory, enum_failed=False)
    assert isinstance(strategies, list)
    assert len(strategies) == 9
    assert len(strategies) == 10
    assert isinstance(strategies[0], dict)


def test_search_all_strategies_with_failed():
    directory = Path(__file__).parent / "strats"
    strategies = StrategyResolver.search_all_objects(directory, enum_failed=True)
    strategies = StrategyResolver._search_all_objects(directory, enum_failed=True)
    assert isinstance(strategies, list)
    assert len(strategies) == 10
    assert len(strategies) == 11
    # with enum_failed=True, search_all_objects() shall find 10 good strategies
    # and 1 which fails to load
    assert len([x for x in strategies if x['class'] is not None]) == 9
    assert len([x for x in strategies if x['class'] is not None]) == 10

    assert len([x for x in strategies if x['class'] is None]) == 1

    directory = Path(__file__).parent / "strats_nonexistingdir"
    strategies = StrategyResolver.search_all_objects(directory, enum_failed=True)
    strategies = StrategyResolver._search_all_objects(directory, enum_failed=True)
    assert len(strategies) == 0


@ -77,10 +78,9 @@ def test_load_strategy_base64(dataframe_1m, caplog, default_conf):


def test_load_strategy_invalid_directory(caplog, default_conf):
    default_conf['strategy'] = 'StrategyTestV3'
    extra_dir = Path.cwd() / 'some/path'
    with pytest.raises(OperationalException):
        StrategyResolver._load_strategy(CURRENT_TEST_STRATEGY, config=default_conf,
    with pytest.raises(OperationalException, match=r"Impossible to load Strategy.*"):
        StrategyResolver._load_strategy('StrategyTestV333', config=default_conf,
                                        extra_dir=extra_dir)

    assert log_has_re(r'Path .*' + r'some.*path.*' + r'.* does not exist', caplog)
@ -102,8 +102,8 @@ def test_load_strategy_noname(default_conf):
    StrategyResolver.load_strategy(default_conf)


@pytest.mark.filterwarnings("ignore:deprecated")
@pytest.mark.parametrize('strategy_name', ['StrategyTestV2'])
@ pytest.mark.filterwarnings("ignore:deprecated")
@ pytest.mark.parametrize('strategy_name', ['StrategyTestV2'])
def test_strategy_pre_v3(dataframe_1m, default_conf, strategy_name):
    default_conf.update({'strategy': strategy_name})

@ -349,7 +349,7 @@ def test_strategy_override_use_exit_profit_only(caplog, default_conf):
    assert log_has("Override strategy 'exit_profit_only' with value in config file: True.", caplog)


@pytest.mark.filterwarnings("ignore:deprecated")
@ pytest.mark.filterwarnings("ignore:deprecated")
def test_missing_implements(default_conf, caplog):

    default_location = Path(__file__).parent / "strats"

@ -2406,6 +2406,8 @@ def test_Trade_object_idem():
        'get_trading_volume',

    )
    EXCLUDES2 = ('trades', 'trades_open', 'bt_trades_open_pp', 'bt_open_open_trade_count',
                 'total_profit')

    # Parent (LocalTrade) should have the same attributes
    for item in trade:
@ -2416,7 +2418,7 @@ def test_Trade_object_idem():
    # Fails if only a column is added without corresponding parent field
    for item in localtrade:
        if (not item.startswith('__')
                and item not in ('trades', 'trades_open', 'total_profit')
                and item not in EXCLUDES2
                and type(getattr(LocalTrade, item)) not in (property, FunctionType)):
            assert item in trade