fix conflicts

longyu 2022-07-22 17:17:57 +02:00
parent 5ad8d08b84
commit 38841e30b8
16 changed files with 838 additions and 180 deletions


@ -63,6 +63,7 @@ Please find the complete documentation on the [freqtrade website](https://www.fr
- [x] **Dry-run**: Run the bot without paying money.
- [x] **Backtesting**: Run a simulation of your buy/sell strategy.
- [x] **Strategy Optimization by machine learning**: Use machine learning to optimize your buy/sell strategy parameters with real exchange data.
- [x] **Adaptive prediction modeling**: Build a smart strategy with FreqAI that self-trains to the market via adaptive machine learning methods. [Learn more](https://www.freqtrade.io/en/stable/freqai/)
- [x] **Edge position sizing**: Calculate your win rate, risk reward ratio, the best stoploss and adjust your position size before taking a position for each specific market. [Learn more](https://www.freqtrade.io/en/stable/edge/).
- [x] **Whitelist crypto-currencies**: Select which crypto-currency you want to trade or use dynamic whitelists.
- [x] **Blacklist crypto-currencies**: Select which crypto-currency you want to avoid.


@ -1,30 +1,26 @@
# Freqai
# FreqAI
!!! Note
Freqai is still experimental, and should be used at the user's own discretion.
Freqai is a module designed to automate a variety of tasks associated with
FreqAI is a module designed to automate a variety of tasks associated with
training a predictive model to provide signals based on input features.
Among the features included:
* Easy large feature set construction based on simple user input
* Sweep model training and backtesting to simulate consistent model retraining through time
* Smart outlier removal of data points from prediction sets using a Dissimilarity Index.
* Data dimensionality reduction with Principal Component Analysis
* Automatic file management for storage of models to be reused during live
* Smart and safe data standardization
* Cleaning of NaNs from the data set before training and prediction.
* Automated live retraining (still VERY experimental. Proceed with caution.)
* Create large rich feature sets (10k+ features) based on simple user created strategies.
* Sweep model training and backtesting to simulate consistent model retraining through time.
* Remove outliers automatically from training and prediction sets using a Dissimilarity Index and Support Vector Machines.
* Reduce the dimensionality of the data with Principal Component Analysis.
* Store models to disk to make reloading from a crash fast and easy (and purge obsolete files automatically for sustained dry/live runs.)
* Normalize the data automatically in a smart and statistically safe way.
* Automated data download and data handling.
* Clean the incoming data of NaNs in a safe way before training and prediction.
* Retrain live automatically so that the model self-adapts to the market in an unsupervised manner.
## General approach
The user provides FreqAI with a set of custom indicators (created inside the strategy the same way
a typical Freqtrade strategy is created) as well as a target value (typically some price change into
the future). FreqAI trains a model to predict the target value based on the input of custom indicators.
a typical Freqtrade strategy is created) as well as a target value (typically some price change into the future). FreqAI trains a model to predict the target value based on the input of custom indicators.
FreqAI will train and save a new model for each pair in the config whitelist.
Users employ FreqAI to backtest a strategy (emulate reality with retraining a model as new data is
introduced) and run the model live to generate buy and sell signals.
Users employ FreqAI to backtest a strategy (emulate reality with retraining a model as new data is introduced) and run the model live to generate buy and sell signals. In dry/live, FreqAI works in a background thread to keep all models as updated as possible with consistent retraining.
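For intuition only, here is a minimal, self-contained sketch of that flow outside of FreqAI (the synthetic price series, the two illustrative `%-` feature columns, and the direct `LGBMRegressor` call are assumptions for the example; inside freqtrade the features and labels come from the strategy's `populate_any_indicators()` and training is handled by the prediction model class):
```python
# Hedged sketch of the general approach: engineer features from indicators,
# define a future-looking target, fit one regressor, and predict a signal.
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor  # same library backing LightGBMPredictionModel

rng = np.random.default_rng(0)
df = pd.DataFrame({"close": 100 + rng.standard_normal(500).cumsum()})

# "%-" prefixed columns play the role of user-created features
df["%-pct-change"] = df["close"].pct_change()
df["%-momentum-10"] = df["close"].pct_change(10)

# "&-" prefixed column plays the role of the target: mean close over the next
# `label_period_candles` candles relative to the current close (as in the example strategy)
label_period = 20
df["&-s_close"] = (
    df["close"].shift(-label_period).rolling(label_period).mean() / df["close"] - 1
)

train = df.dropna()
model = LGBMRegressor(n_estimators=100)
model.fit(train.filter(like="%-"), train["&-s_close"])
prediction = model.predict(train.filter(like="%-").tail(1))  # signal for the newest candle
```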
## Background and vocabulary
@ -58,17 +54,55 @@ Use `pip` to install the prerequisites with:
## Running from the example files
An example strategy, an example prediction model, and an example config can all be found in
`freqtrade/templates/ExampleFreqaiStrategy.py`,
`freqtrade/freqai/prediction_models/CatboostPredictionModel.py`,
`config_examples/config_freqai.example.json`, respectively. Assuming the user has downloaded
`freqtrade/templates/FreqaiExampleStrategy.py`,
`freqtrade/freqai/prediction_models/LightGBMPredictionModel.py`,
`config_examples/config_freqai_futures.example.json`, respectively. Assuming the user has downloaded
the necessary data, Freqai can be executed from these templates with:
```bash
freqtrade backtesting --config config_examples/config_freqai.example.json --strategy FreqaiExampleStrategy --freqaimodel CatboostPredictionModel --strategy-path freqtrade/templates --timerange 20220101-20220201
freqtrade backtesting --config config_examples/config_freqai.example.json --strategy FreqaiExampleStrategy --freqaimodel LightGBMPredictionModel --strategy-path freqtrade/templates --timerange 20220101-20220201
```
## Configuring the bot
The table below will list all configuration parameters available for `FreqAI`.
Mandatory parameters are marked as **Required**, which means that they are required to be set in one of the possible ways.
| Parameter | Description |
|------------|-------------|
| `freqai` | **Required.** The dictionary containing all the parameters for controlling FreqAI. <br> **Datatype:** dictionary.
| `identifier` | **Required.** A unique name for the current model. This can be reused to reload pretrained models/data. <br> **Datatype:** string.
| `train_period_days` | **Required.** Number of days to use for the training data (width of the sliding window). <br> **Datatype:** positive integer.
| `backtest_period_days` | **Required.** Number of days over which to run inference with the trained model before sliding the window and retraining. This can be a fractional number of days, but beware that the user-provided `timerange` will be divided by this number to yield the number of trainings necessary to complete the backtest. <br> **Datatype:** Float.
| `live_retrain_hours` | Frequency of retraining during dry/live runs. Defaults to 0, which means the model retrains as often as possible. <br> **Datatype:** Float > 0.
| `follow_mode` | If true, this instance of FreqAI will look for models associated with `identifier` and load those for inferencing. A `follower` will **not** train new models. False by default. <br> **Datatype:** boolean.
| `live_trained_timestamp` | Useful if user wants to start from models trained during a *backtest*. The timestamp can be located in the `user_data/models` backtesting folder. This is not a commonly used parameter, leave undefined for most applications. <br> **Datatype:** positive integer.
| `fit_live_predictions_candles` | Computes target (label) statistics from prediction data instead of from the training data set. The number of candles is the number of historical candles used to generate the statistics. <br> **Datatype:** positive integer.
| | **Feature Parameters**
| `feature_parameters` | A dictionary containing the parameters used to engineer the feature set. Details and examples shown [here](#building-the-feature-set) <br> **Datatype:** dictionary.
| `include_corr_pairlist` | A list of correlated coins that FreqAI will add as additional features to all `pair_whitelist` coins. All indicators set in `populate_any_indicators` will be created for each coin in this list, and that set of features is added to the base asset feature set. <br> **Datatype:** list of assets (strings).
| `include_timeframes` | A list of timeframes that all indicators in `populate_any_indicators` will be created for and added as features to the base asset feature set. <br> **Datatype:** list of timeframes (strings).
| `label_period_candles` | Number of candles into the future that the labels are created for. This is used in `populate_any_indicators` (refer to `templates/FreqaiExampleStrategy.py` for detailed usage). The user can create custom labels, making use of this parameter or not. <br> **Datatype:** positive integer.
| `include_shifted_candles` | Parameter used to add a sense of temporal recency to flattened regression type input data. `include_shifted_candles` takes all features, duplicates and shifts them by the number indicated by the user. <br> **Datatype:** positive integer.
| `DI_threshold` | Activates the Dissimilarity Index for outlier detection when above 0, explained more [here](#removing-outliers-with-the-dissimilarity-index). <br> **Datatype:** positive float (typically below 1).
| `weight_factor` | Used to set weights for training data points according to their recency, see details and a figure of how it works [here](#controlling-the-model-learning-process). <br> **Datatype:** positive float (typically below 1).
| `principal_component_analysis` | Ask FreqAI to automatically reduce the dimensionality of the data set using PCA. <br> **Datatype:** boolean.
| `use_SVM_to_remove_outliers` | Ask FreqAI to train a support vector machine to detect and remove outliers from the training data set as well as from incoming data points. <br> **Datatype:** boolean.
| `svm_nu` | The `nu` parameter for the support vector machine. *Very* broadly, this is the percentage of data points that should be considered outliers. <br> **Datatype:** float between 0 and 1.
| `stratify_training_data` | This value is used to indicate the stratification of the data. e.g. 2 would set every 2nd data point into a separate dataset to be pulled from during training/testing. <br> **Datatype:** positive integer.
| `indicator_max_period_candles` | The maximum *period* used in `populate_any_indicators()` for indicator creation. FreqAI uses this information in combination with the maximum timeframe to calculate how many data points it should download so that the first data point does not have a NaN. <br> **Datatype:** positive integer.
| `indicator_periods_candles` | A list of integers used to duplicate all indicators according to a set of periods and add them to the feature set. <br> **Datatype:** list of positive integers.
| | **Data split parameters**
| `data_split_parameters` | Include any additional parameters available from Scikit-learn's `train_test_split()`, which are shown [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). <br> **Datatype:** dictionary.
| `test_size` | Fraction of data that should be used for testing instead of training. <br> **Datatype:** positive float below 1.
| `shuffle` | Shuffle the training data points during training. Typically for time-series forecasting, this is set to False. <br> **Datatype:** boolean.
| | **Model training parameters**
| `model_training_parameters` | A flexible dictionary that includes all parameters available in the user-selected library. For example, if the user uses `LightGBMPredictionModel`, then this dictionary can contain any parameter available to the `LightGBMRegressor` [here](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html). If the user selects a different model, then this dictionary can contain any parameter from that different model. <br> **Datatype:** dictionary.
| `n_estimators` | A common parameter among regressors which sets the number of boosted trees to fit. <br> **Datatype:** integer.
| `learning_rate` | A common parameter among regressors which sets the boosting learning rate. <br> **Datatype:** float.
| `n_jobs`, `thread_count`, `task_type` | Different libraries use different parameter names to control the number of threads used for parallel processing, or whether the `task_type` is `gpu` or `cpu`. <br> **Datatype:** float.
### Example config file
The user interface is isolated to the typical config file. A typical Freqai
@ -115,7 +149,7 @@ components/structures that the user *must* include when building their feature s
`with self.model.bridge.lock:` must be used to ensure thread safety - especially when using third
party libraries for indicator construction such as TA-lib. Another structure to consider is the
location of the labels at the bottom of the example function (below `if set_generalized_indicators:`).
This is where the user will add single features labels to their feature set to avoid duplication from
This is where the user will add single features and labels to their feature set to avoid duplication from
various configuration parameters which multiply the feature set, such as `include_timeframes`.
```python
@ -213,14 +247,14 @@ a specific pair or timeframe, they should use the following structure inside `po
(as exemplified in `freqtrade/templates/FreqaiExampleStrategy.py`:
```python
def populate_any_indicators(self, metadata, pair, df, tf, informative=None, coin=""):
def populate_any_indicators(self, metadata, pair, df, tf, informative=None, coin="", set_generalized_indicators=False):
...
# Add generalized indicators here (because in live, it will call only this function to populate
# indicators for retraining). Notice how we ensure not to add them multiple times by associating
# these generalized indicators to the basepair/timeframe
if pair == metadata['pair'] and tf == self.timeframe:
if set_generalized_indicators:
df['%-day_of_week'] = (df["date"].dt.dayofweek + 1) / 7
df['%-hour_of_day'] = (df['date'].dt.hour + 1) / 25
@ -292,7 +326,7 @@ and adding this to the `train_period_days`. The units need to be in the base can
The freqai training/backtesting module can be executed with the following command:
```bash
freqtrade backtesting --strategy FreqaiExampleStrategy --config config_freqai.example.json --freqaimodel CatboostPredictionModel --timerange 20210501-20210701
freqtrade backtesting --strategy FreqaiExampleStrategy --config config_freqai_futures.example.json --freqaimodel LightGBMPredictionModel --timerange 20210501-20210701
```
If this command has never been executed with the existing config file, then it will train a new model
@ -334,32 +368,15 @@ The Freqai strategy requires the user to include the following lines of code in
def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
self.freqai_info = self.config["freqai"]
self.pair = metadata["pair"]
sgi = True
# the following loops are necessary for building the features
# indicated by the user in the configuration file.
# All indicators must be populated by populate_any_indicators() for live functionality
# to work correctly.
for tf in self.freqai_info["feature_parameters"]["include_timeframes"]:
dataframe = self.populate_any_indicators(
metadata,
self.pair,
dataframe.copy(),
tf,
coin=self.pair.split("/")[0] + "-",
set_generalized_indicators=sgi,
)
sgi = False
for pair in self.freqai_info["feature_parameters"]["include_corr_pairlist"]:
if metadata["pair"] in pair:
continue # do not include whitelisted pair twice if it is in corr_pairlist
dataframe = self.populate_any_indicators(
metadata, pair, dataframe.copy(), tf, coin=pair.split("/")[0] + "-"
)
# the model will return 4 values, its prediction, an indication of whether or not the
# prediction should be accepted, the target mean/std values from the labels used during
# each training period.
# the model will return all labels created by user in `populate_any_indicators`
# (& appended targets), an indication of whether or not the prediction should be accepted,
# the target mean/std values for each of the labels created by user in
# `populate_any_indicators()` for each training period.
dataframe = self.model.bridge.start(dataframe, metadata, self)
return dataframe
@ -370,7 +387,7 @@ the feature set with a proper naming convention for the IFreqaiModel to use late
### Building an IFreqaiModel
Freqai has an example prediction model based on the popular `Catboost` regression (`freqai/prediction_models/CatboostPredictionModel.py`). However, users can customize and create
FreqAI ships multiple example prediction models based on popular libraries, such as `Catboost` regression (`freqai/prediction_models/CatboostPredictionModel.py`) and `LightGBM` regression. However, users can customize and create
their own prediction models using the `IFreqaiModel` class. Users are encouraged to inherit `train()` and `predict()` to let them customize various aspects of their training procedures.
### Running the model live
@ -443,7 +460,7 @@ $\overline{d}$ quantifies the spread of the training data, which is compared to
the distance between the new prediction feature vectors, $X_k$ and all the training
data:
$$ d_k = \argmin_i d_{k,i} $$
$$ d_k = \arg \min_i d_{k,i} $$
which enables the estimation of a Dissimilarity Index:
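The formula itself falls outside this hunk; as a hedged reconstruction from the definitions above (an assumption, not a quote from the full document), the Dissimilarity Index compares the new point's distance to the typical training spread:
$$ DI_k = \frac{d_k}{\overline{d}} $$
so that a large $DI_k$ flags $X_k$ as dissimilar to the training data, which is presumably what the `DI_threshold` parameter is compared against.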
@ -635,6 +652,14 @@ below this value. An example usage in the strategy may look something like:
## Additional information
### Common pitfalls
FreqAI cannot be combined with `VolumePairlists` (or any pairlist filter that adds and removes pairs dynamically).
This is for performance reasons - FreqAI relies on making quick predictions/retrains. To do this effectively,
it needs to download all the training data at the beginning of a dry/live instance. FreqAI stores and appends
new candles automatically for future retrains. But this means that if new pairs arrive later in the dry run due
to a volume pairlist, it will not have the data ready. FreqAI does work, however, with the `ShufflePairlist`.
### Feature normalization
The feature set created by the user is automatically normalized to the training


@ -301,7 +301,7 @@ class FreqaiDataDrawer:
model_folders = [x for x in self.full_path.iterdir() if x.is_dir()]
pattern = re.compile(r"sub-train-(\w+)(\d{10})")
pattern = re.compile(r"sub-train-(\w+)_(\d{10})")
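# with the added underscore this matches folder names like "sub-train-ADA_1651234567"
# (coin name, then a 10-digit training timestamp; the example name is hypothetical)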
delete_dict: Dict[str, Any] = {}


@ -88,7 +88,8 @@ class FreqaiDataKitchen:
)
self.data_path = Path(
self.full_path / str("sub-train" + "-" + pair.split("/")[0] + str(trained_timestamp))
self.full_path
/ str("sub-train" + "-" + pair.split("/")[0] + "_" + str(trained_timestamp))
)
return
@ -179,6 +180,7 @@ class FreqaiDataKitchen:
model = load(self.data_path / str(self.model_filename + "_model.joblib"))
else:
from tensorflow import keras
model = keras.models.load_model(self.data_path / str(self.model_filename + "_model.h5"))
if Path(self.data_path / str(self.model_filename + "_svm_model.joblib")).resolve().exists():
@ -410,6 +412,10 @@ class FreqaiDataKitchen:
bt_split: the backtesting length (days). Specified in user configuration file
"""
if not isinstance(train_split, int) or train_split < 1:
raise OperationalException(
"train_period_days must be an integer greater than 0. " f"Got {train_split}."
)
train_period_days = train_split * SECONDS_IN_DAY
bt_period = bt_split * SECONDS_IN_DAY
@ -561,8 +567,10 @@ class FreqaiDataKitchen:
"""
if self.keras:
logger.warning("SVM outlier removal not currently supported for Keras based models. "
"Skipping user requested function.")
logger.warning(
"SVM outlier removal not currently supported for Keras based models. "
"Skipping user requested function."
)
if predict:
self.do_predict = np.ones(len(self.data_dictionary["prediction_features"]))
return
@ -676,8 +684,7 @@ class FreqaiDataKitchen:
training than older data.
"""
wfactor = self.config["freqai"]["feature_parameters"]["weight_factor"]
weights = np.exp(
- np.arange(num_weights) / (wfactor * num_weights))[::-1]
weights = np.exp(-np.arange(num_weights) / (wfactor * num_weights))[::-1]
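# exponentially decaying weights: assuming the data is ordered oldest to newest, the
# newest point gets weight 1.0 and the oldest decays toward exp(-1/wfactor)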
return weights
def append_predictions(self, predictions, do_predict, len_dataframe):
@ -685,8 +692,6 @@ class FreqaiDataKitchen:
Append backtest prediction from current backtest period to all previous periods
"""
# ones = np.ones(len(predictions))
# target_mean, target_std = ones * self.data["target_mean"], ones * self.data["target_std"]
self.append_df = DataFrame()
for label in self.label_list:
self.append_df[label] = predictions[label]
@ -702,13 +707,6 @@ class FreqaiDataKitchen:
else:
self.full_df = pd.concat([self.full_df, self.append_df], axis=0)
# self.full_predictions = np.append(self.full_predictions, predictions)
# self.full_do_predict = np.append(self.full_do_predict, do_predict)
# if self.freqai_config.get("feature_parameters", {}).get("DI_threshold", 0) > 0:
# self.full_DI_values = np.append(self.full_DI_values, self.DI_values)
# self.full_target_mean = np.append(self.full_target_mean, target_mean)
# self.full_target_std = np.append(self.full_target_std, target_std)
return
def fill_predictions(self, dataframe):
@ -729,25 +727,34 @@ class FreqaiDataKitchen:
self.append_df = DataFrame()
self.full_df = DataFrame()
# self.full_predictions = np.append(filler, self.full_predictions)
# self.full_do_predict = np.append(filler, self.full_do_predict)
# if self.freqai_config.get("feature_parameters", {}).get("DI_threshold", 0) > 0:
# self.full_DI_values = np.append(filler, self.full_DI_values)
# self.full_target_mean = np.append(filler, self.full_target_mean)
# self.full_target_std = np.append(filler, self.full_target_std)
return
def create_fulltimerange(self, backtest_tr: str, backtest_period_days: int) -> str:
if not isinstance(backtest_period_days, int):
raise OperationalException("backtest_period_days must be an integer")
if backtest_period_days < 0:
raise OperationalException("backtest_period_days must be positive")
backtest_timerange = TimeRange.parse_timerange(backtest_tr)
if backtest_timerange.stopts == 0:
backtest_timerange.stopts = int(
datetime.datetime.now(tz=datetime.timezone.utc).timestamp()
)
# typically open ended time ranges do work, however, there are some edge cases where
# it does not. Accommodating these kinds of edge cases just to allow open-ended
# timeranges is not high enough priority to warrant the effort. It is safer for now
# to simply ask the user to add their end date
raise OperationalException("FreqAI backtesting does not allow open ended timeranges. "
"Please indicate the end date of your desired backtesting. "
"timerange.")
# backtest_timerange.stopts = int(
# datetime.datetime.now(tz=datetime.timezone.utc).timestamp()
# )
backtest_timerange.startts = (backtest_timerange.startts
- backtest_period_days * SECONDS_IN_DAY)
backtest_timerange.startts = (
backtest_timerange.startts - backtest_period_days * SECONDS_IN_DAY
)
start = datetime.datetime.utcfromtimestamp(backtest_timerange.startts)
stop = datetime.datetime.utcfromtimestamp(backtest_timerange.stopts)
full_timerange = start.strftime("%Y%m%d") + "-" + stop.strftime("%Y%m%d")
@ -793,8 +800,9 @@ class FreqaiDataKitchen:
data_load_timerange = TimeRange()
# find the max indicator length required
max_timeframe_chars = self.freqai_config.get(
"feature_parameters", {}).get("include_timeframes")[-1]
max_timeframe_chars = self.freqai_config.get("feature_parameters", {}).get(
"include_timeframes"
)[-1]
max_period = self.freqai_config.get("feature_parameters", {}).get(
"indicator_max_period_candles", 50
)
@ -861,35 +869,11 @@ class FreqaiDataKitchen:
coin, _ = pair.split("/")
self.data_path = Path(
self.full_path
/ str("sub-train" + "-" + pair.split("/")[0] + str(int(trained_timerange.stopts)))
/ str("sub-train" + "-" + pair.split("/")[0] + "_" + str(int(trained_timerange.stopts)))
)
self.model_filename = "cb_" + coin.lower() + "_" + str(int(trained_timerange.stopts))
# self.freqai_config['live_trained_timerange'] = str(int(trained_timerange.stopts))
# enables persistence, but not fully implemented into save/load data yet
# self.data['live_trained_timerange'] = str(int(trained_timerange.stopts))
# SUPERSEDED
# def download_new_data_for_retraining(self, timerange: TimeRange, metadata: dict,
# strategy: IStrategy) -> None:
# exchange = ExchangeResolver.load_exchange(self.config['exchange']['name'],
# self.config, validate=False, freqai=True)
# # exchange = strategy.dp._exchange # closes ccxt session
# pairs = copy.deepcopy(self.freqai_config.get('corr_pairlist', []))
# if str(metadata['pair']) not in pairs:
# pairs.append(str(metadata['pair']))
# refresh_backtest_ohlcv_data(
# exchange, pairs=pairs, timeframes=self.freqai_config.get('timeframes'),
# datadir=self.config['datadir'], timerange=timerange,
# new_pairs_days=self.config['new_pairs_days'],
# erase=False, data_format=self.config.get('dataformat_ohlcv', 'json'),
# trading_mode=self.config.get('trading_mode', 'spot'),
# prepend=self.config.get('prepend_data', False)
# )
def download_all_data_for_training(self, timerange: TimeRange) -> None:
"""
Called only once upon start of bot to download the necessary data for
@ -969,8 +953,9 @@ class FreqaiDataKitchen:
def set_all_pairs(self) -> None:
self.all_pairs = copy.deepcopy(self.freqai_config.get(
'feature_parameters', {}).get('include_corr_pairlist', []))
self.all_pairs = copy.deepcopy(
self.freqai_config.get("feature_parameters", {}).get("include_corr_pairlist", [])
)
for pair in self.config.get("exchange", "").get("pair_whitelist"):
if pair not in self.all_pairs:
self.all_pairs.append(pair)
@ -1014,8 +999,9 @@ class FreqaiDataKitchen:
corr_dataframes: Dict[Any, Any] = {}
base_dataframes: Dict[Any, Any] = {}
historic_data = self.dd.historic_data
pairs = self.freqai_config.get('feature_parameters', {}).get(
'include_corr_pairlist', [])
pairs = self.freqai_config.get("feature_parameters", {}).get(
"include_corr_pairlist", []
)
for tf in self.freqai_config.get("feature_parameters", {}).get("include_timeframes"):
base_dataframes[tf] = self.slice_dataframe(timerange, historic_data[pair][tf])
@ -1031,40 +1017,13 @@ class FreqaiDataKitchen:
return corr_dataframes, base_dataframes
# SUPERSEDED
# def load_pairs_histories(self, timerange: TimeRange, metadata: dict) -> Tuple[Dict[Any, Any],
# DataFrame]:
# corr_dataframes: Dict[Any, Any] = {}
# base_dataframes: Dict[Any, Any] = {}
# pairs = self.freqai_config.get('include_corr_pairlist', []) # + [metadata['pair']]
# # timerange = TimeRange.parse_timerange(new_timerange)
# for tf in self.freqai_config.get('timeframes'):
# base_dataframes[tf] = load_pair_history(datadir=self.config['datadir'],
# timeframe=tf,
# pair=metadata['pair'], timerange=timerange,
# data_format=self.config.get(
# 'dataformat_ohlcv', 'json'),
# candle_type=self.config.get(
# 'trading_mode', 'spot'))
# if pairs:
# for p in pairs:
# if metadata['pair'] in p:
# continue # dont repeat anything from whitelist
# if p not in corr_dataframes:
# corr_dataframes[p] = {}
# corr_dataframes[p][tf] = load_pair_history(datadir=self.config['datadir'],
# timeframe=tf,
# pair=p, timerange=timerange,
# data_format=self.config.get(
# 'dataformat_ohlcv', 'json'),
# candle_type=self.config.get(
# 'trading_mode', 'spot'))
# return corr_dataframes, base_dataframes
def use_strategy_to_populate_indicators(
self, strategy: IStrategy, corr_dataframes: dict, base_dataframes: dict, pair: str
self,
strategy: IStrategy,
corr_dataframes: dict = {},
base_dataframes: dict = {},
pair: str = "",
prediction_dataframe: DataFrame = pd.DataFrame(),
) -> DataFrame:
"""
Use the user defined strategy for populating indicators during
@ -1079,16 +1038,31 @@ class FreqaiDataKitchen:
:returns:
dataframe: DataFrame = dataframe containing populated indicators
"""
dataframe = base_dataframes[self.config["timeframe"]].copy()
pairs = self.freqai_config.get('feature_parameters', {}).get('include_corr_pairlist', [])
# for prediction dataframe creation, we let the dataprovider handle everything in the strategy,
# so we create empty dictionaries, which allows us to pass None to
# `populate_any_indicators()`, signaling that we want the dp to give us the live dataframe.
tfs = self.freqai_config.get("feature_parameters", {}).get("include_timeframes")
pairs = self.freqai_config.get("feature_parameters", {}).get("include_corr_pairlist", [])
if not prediction_dataframe.empty:
dataframe = prediction_dataframe.copy()
for tf in tfs:
base_dataframes[tf] = None
for p in pairs:
if p not in corr_dataframes:
corr_dataframes[p] = {}
corr_dataframes[p][tf] = None
else:
dataframe = base_dataframes[self.config["timeframe"]].copy()
sgi = True
for tf in self.freqai_config.get("feature_parameters", {}).get("include_timeframes"):
for tf in tfs:
dataframe = strategy.populate_any_indicators(
pair,
pair,
dataframe.copy(),
tf,
base_dataframes[tf],
informative=base_dataframes[tf],
coin=pair.split("/")[0] + "-",
set_generalized_indicators=sgi,
)
@ -1102,7 +1076,7 @@ class FreqaiDataKitchen:
i,
dataframe.copy(),
tf,
corr_dataframes[i][tf],
informative=corr_dataframes[i][tf],
coin=i.split("/")[0] + "-",
)
@ -1113,7 +1087,8 @@ class FreqaiDataKitchen:
Fit the labels with a gaussian distribution
"""
import scipy as spy
num_candles = self.freqai_config.get('fit_live_predictions_candles', 100)
num_candles = self.freqai_config.get("fit_live_predictions_candles", 100)
self.data["labels_mean"], self.data["labels_std"] = {}, {}
for label in self.label_list:
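# norm.fit returns the maximum-likelihood (mean, std) of this label's recent predictions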
f = spy.stats.norm.fit(self.dd.historic_predictions[self.pair][label].tail(num_candles))


@ -73,6 +73,8 @@ class IFreqaiModel(ABC):
self.freqai_info["feature_parameters"]["DI_threshold"] = 0
logger.warning("DI threshold is not configured for Keras models yet. Deactivating.")
self.CONV_WIDTH = self.freqai_info.get("conv_width", 2)
self.pair_it = 0
self.total_pairs = len(self.config.get("exchange", {}).get("pair_whitelist"))
def assert_config(self, config: Dict[str, Any]) -> None:
@ -106,6 +108,10 @@ class IFreqaiModel(ABC):
elif not self.follow_mode:
self.dk = FreqaiDataKitchen(self.config, self.dd, self.live, metadata["pair"])
logger.info(f"Training {len(self.dk.training_timeranges)} timeranges")
dataframe = self.dk.use_strategy_to_populate_indicators(
strategy, prediction_dataframe=dataframe, pair=metadata["pair"]
)
dk = self.start_backtesting(dataframe, metadata, self.dk)
dataframe = self.remove_features_from_df(dk.return_dataframe)
@ -160,6 +166,8 @@ class IFreqaiModel(ABC):
dk: FreqaiDataKitchen = Data management/analysis tool associated to present pair only
"""
self.pair_it += 1
train_it = 0
# Loop enforcing the sliding window training/backtesting paradigm
# tr_train is the training time range e.g. 1 historical month
# tr_backtest is the backtesting time range e.g. the week directly
@ -167,22 +175,26 @@ class IFreqaiModel(ABC):
# entire backtest
for tr_train, tr_backtest in zip(dk.training_timeranges, dk.backtesting_timeranges):
(_, _, _, _) = self.dd.get_pair_dict_info(metadata["pair"])
train_it += 1
total_trains = len(dk.backtesting_timeranges)
gc.collect()
dk.data = {} # clean the pair specific data between training window sliding
self.training_timerange = tr_train
# self.training_timerange_timerange = tr_train
dataframe_train = dk.slice_dataframe(tr_train, dataframe)
dataframe_backtest = dk.slice_dataframe(tr_backtest, dataframe)
trained_timestamp = tr_train # TimeRange.parse_timerange(tr_train)
trained_timestamp = tr_train
tr_train_startts_str = datetime.datetime.utcfromtimestamp(tr_train.startts).strftime(
"%Y-%m-%d %H:%M:%S"
)
tr_train_stopts_str = datetime.datetime.utcfromtimestamp(tr_train.stopts).strftime(
"%Y-%m-%d %H:%M:%S"
)
logger.info("Training %s", metadata["pair"])
logger.info(f"Training {tr_train_startts_str} to {tr_train_stopts_str}")
logger.info(
f"Training {metadata['pair']}, {self.pair_it}/{self.total_pairs} pairs"
f" from {tr_train_startts_str} to {tr_train_stopts_str}, {train_it}/{total_trains} "
"trains"
)
dk.data_path = Path(
dk.full_path
@ -190,6 +202,7 @@ class IFreqaiModel(ABC):
"sub-train"
+ "-"
+ metadata["pair"].split("/")[0]
+ "_"
+ str(int(trained_timestamp.stopts))
)
)
@ -281,6 +294,10 @@ class IFreqaiModel(ABC):
# load the model and associated data into the data kitchen
self.model = dk.load_data(coin=metadata["pair"])
dataframe = self.dk.use_strategy_to_populate_indicators(
strategy, prediction_dataframe=dataframe, pair=metadata["pair"]
)
if not self.model:
logger.warning(
f"No model ready for {metadata['pair']}, returning null values to strategy."


@ -171,32 +171,15 @@ class FreqaiExampleStrategy(IStrategy):
def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
self.freqai_info = self.config["freqai"]
self.pair = metadata["pair"]
sgi = True
# the following loops are necessary for building the features
# indicated by the user in the configuration file.
# All indicators must be populated by populate_any_indicators() for live functionality
# to work correctly.
for tf in self.freqai_info["feature_parameters"]["include_timeframes"]:
dataframe = self.populate_any_indicators(
metadata,
self.pair,
dataframe.copy(),
tf,
coin=self.pair.split("/")[0] + "-",
set_generalized_indicators=sgi,
)
sgi = False
for pair in self.freqai_info["feature_parameters"]["include_corr_pairlist"]:
if metadata["pair"] in pair:
continue # do not include whitelisted pair twice if it is in corr_pairlist
dataframe = self.populate_any_indicators(
metadata, pair, dataframe.copy(), tf, coin=pair.split("/")[0] + "-"
)
# the model will return 4 values, its prediction, an indication of whether or not the
# prediction should be accepted, the target mean/std values from the labels used during
# each training period.
# the model will return all labels created by user in `populate_any_indicators`
# (& appended targets), an indication of whether or not the prediction should be accepted,
# the target mean/std values for each of the labels created by user in
# `populate_any_indicators()` for each training period.
dataframe = self.model.bridge.start(dataframe, metadata, self)
dataframe["target_roi"] = dataframe["&-s_close_mean"] + dataframe["&-s_close_std"] * 1.25


@ -35,7 +35,7 @@ nav:
- Edge Positioning: edge.md
- Advanced Strategy: strategy-advanced.md
- Advanced Hyperopt: advanced-hyperopt.md
- Freqai: freqai.md
- FreqAI: freqai.md
- Sandbox Testing: sandbox-testing.md
- FAQ: faq.md
- SQL Cheat-sheet: sql_cheatsheet.md


@ -2,6 +2,7 @@
-r requirements.txt
-r requirements-plot.txt
-r requirements-hyperopt.txt
-r requirements-freqai.txt
-r docs/requirements-docs.txt
coveralls==3.3.1


@ -2,7 +2,7 @@
-r requirements.txt
# Required for freqai
scikit-learn==1.0.2
scikit-learn==1.1.1
scikit-optimize==0.9.0
joblib==1.1.0
catboost==1.0.4


@ -77,7 +77,15 @@ function updateenv() {
fi
fi
${PYTHON} -m pip install --upgrade -r ${REQUIREMENTS} ${REQUIREMENTS_HYPEROPT} ${REQUIREMENTS_PLOT}
REQUIREMENTS_FREQAI=""
read -p "Do you want to install dependencies for freqai [y/N]? "
dev=$REPLY
if [[ $REPLY =~ ^[Yy]$ ]]
then
REQUIREMENTS_FREQAI="-r requirements-freqai.txt"
fi
${PYTHON} -m pip install --upgrade -r ${REQUIREMENTS} ${REQUIREMENTS_HYPEROPT} ${REQUIREMENTS_PLOT} ${REQUIREMENTS_FREQAI}
if [ $? -ne 0 ]; then
echo "Failed installing dependencies"
exit 1

tests/freqai/conftest.py

@ -0,0 +1,117 @@
from copy import deepcopy
from pathlib import Path
from unittest.mock import MagicMock
from freqtrade.configuration import TimeRange
from freqtrade.data.dataprovider import DataProvider
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
from freqtrade.resolvers import StrategyResolver
from freqtrade.resolvers.freqaimodel_resolver import FreqaiModelResolver
from tests.conftest import get_patched_exchange
# @pytest.fixture(scope="function")
def freqai_conf(default_conf):
freqaiconf = deepcopy(default_conf)
freqaiconf.update(
{
"datadir": Path(default_conf["datadir"]),
"strategy": "freqai_test_strat",
"strategy-path": "freqtrade/tests/strategy/strats",
"freqaimodel": "LightGBMPredictionModel",
"freqaimodel_path": "freqai/prediction_models",
"timerange": "20180110-20180115",
"freqai": {
"startup_candles": 10000,
"purge_old_models": True,
"train_period_days": 5,
"backtest_period_days": 2,
"live_retrain_hours": 0,
"expiration_hours": 1,
"identifier": "uniqe-id100",
"live_trained_timestamp": 0,
"feature_parameters": {
"include_timeframes": ["5m"],
"include_corr_pairlist": ["ADA/BTC", "DASH/BTC"],
"label_period_candles": 20,
"include_shifted_candles": 1,
"DI_threshold": 0.9,
"weight_factor": 0.9,
"principal_component_analysis": False,
"use_SVM_to_remove_outliers": True,
"stratify_training_data": 0,
"indicator_max_period_candles": 10,
"indicator_periods_candles": [10],
},
"data_split_parameters": {"test_size": 0.33, "random_state": 1},
"model_training_parameters": {"n_estimators": 100, "verbosity": 0},
},
"config_files": [Path('config_examples', 'config_freqai_futures.example.json')]
}
)
freqaiconf['exchange'].update({'pair_whitelist': ['ADA/BTC', 'DASH/BTC', 'ETH/BTC', 'LTC/BTC']})
return freqaiconf
def get_patched_data_kitchen(mocker, freqaiconf):
dd = mocker.patch('freqtrade.freqai.data_drawer', MagicMock())
dk = FreqaiDataKitchen(freqaiconf, dd)
return dk
def get_patched_freqai_strategy(mocker, freqaiconf):
strategy = StrategyResolver.load_strategy(freqaiconf)
strategy.bot_start()
return strategy
def get_patched_freqaimodel(mocker, freqaiconf):
freqaimodel = FreqaiModelResolver.load_freqaimodel(freqaiconf)
return freqaimodel
def get_freqai_live_analyzed_dataframe(mocker, freqaiconf):
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
freqai = strategy.model.bridge
freqai.live = True
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180114")
freqai.dk.load_all_pair_histories(timerange)
strategy.analyze_pair('ADA/BTC', '5m')
return strategy.dp.get_analyzed_dataframe('ADA/BTC', '5m')
def get_freqai_analyzed_dataframe(mocker, freqaiconf):
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
strategy.freqai_info = freqaiconf.get("freqai", {})
freqai = strategy.model.bridge
freqai.live = True
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180114")
freqai.dk.load_all_pair_histories(timerange)
sub_timerange = TimeRange.parse_timerange("20180111-20180114")
corr_df, base_df = freqai.dk.get_base_and_corr_dataframes(sub_timerange, "LTC/BTC")
return freqai.dk.use_strategy_to_populate_indicators(strategy, corr_df, base_df, 'LTC/BTC')
def get_ready_to_train(mocker, freqaiconf):
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
strategy.freqai_info = freqaiconf.get("freqai", {})
freqai = strategy.model.bridge
freqai.live = True
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180114")
freqai.dk.load_all_pair_histories(timerange)
sub_timerange = TimeRange.parse_timerange("20180111-20180114")
corr_df, base_df = freqai.dk.get_base_and_corr_dataframes(sub_timerange, "LTC/BTC")
return corr_df, base_df, freqai, strategy


@ -0,0 +1,167 @@
# from unittest.mock import MagicMock
# from freqtrade.commands.optimize_commands import setup_optimize_configuration, start_edge
import copy
import datetime
import shutil
from pathlib import Path
import pytest
from freqtrade.configuration import TimeRange
from freqtrade.data.dataprovider import DataProvider
# from freqtrade.freqai.data_drawer import FreqaiDataDrawer
from freqtrade.exceptions import OperationalException
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
from tests.conftest import get_patched_exchange
from tests.freqai.conftest import freqai_conf, get_patched_data_kitchen, get_patched_freqai_strategy
@pytest.mark.parametrize(
"timerange, train_period_days, expected_result",
[
("20220101-20220201", 30, "20211202-20220201"),
("20220301-20220401", 15, "20220214-20220401"),
],
)
def test_create_fulltimerange(
timerange, train_period_days, expected_result, default_conf, mocker, caplog
):
dk = get_patched_data_kitchen(mocker, freqai_conf(copy.deepcopy(default_conf)))
assert dk.create_fulltimerange(timerange, train_period_days) == expected_result
shutil.rmtree(Path(dk.full_path))
def test_create_fulltimerange_incorrect_backtest_period(mocker, default_conf):
dk = get_patched_data_kitchen(mocker, freqai_conf(copy.deepcopy(default_conf)))
with pytest.raises(OperationalException, match=r"backtest_period_days must be an integer"):
dk.create_fulltimerange("20220101-20220201", 0.5)
with pytest.raises(OperationalException, match=r"backtest_period_days must be positive"):
dk.create_fulltimerange("20220101-20220201", -1)
shutil.rmtree(Path(dk.full_path))
@pytest.mark.parametrize(
"timerange, train_period_days, backtest_period_days, expected_result",
[
("20220101-20220201", 30, 7, 9),
("20220101-20220201", 30, 0.5, 120),
("20220101-20220201", 10, 1, 80),
],
)
def test_split_timerange(
mocker, default_conf, timerange, train_period_days, backtest_period_days, expected_result
):
freqaiconf = freqai_conf(copy.deepcopy(default_conf))
freqaiconf.update({"timerange": "20220101-20220401"})
dk = get_patched_data_kitchen(mocker, freqaiconf)
tr_list, bt_list = dk.split_timerange(timerange, train_period_days, backtest_period_days)
assert len(tr_list) == len(bt_list) == expected_result
with pytest.raises(
OperationalException, match=r"train_period_days must be an integer greater than 0."
):
dk.split_timerange("20220101-20220201", -1, 0.5)
shutil.rmtree(Path(dk.full_path))
def test_update_historic_data(mocker, default_conf):
freqaiconf = freqai_conf(copy.deepcopy(default_conf))
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
freqai = strategy.model.bridge
freqai.live = True
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180114")
freqai.dk.load_all_pair_histories(timerange)
historic_candles = len(freqai.dd.historic_data["ADA/BTC"]["5m"])
dp_candles = len(strategy.dp.get_pair_dataframe("ADA/BTC", "5m"))
candle_difference = dp_candles - historic_candles
freqai.dk.update_historic_data(strategy)
updated_historic_candles = len(freqai.dd.historic_data["ADA/BTC"]["5m"])
assert updated_historic_candles - historic_candles == candle_difference
shutil.rmtree(Path(freqai.dk.full_path))
@pytest.mark.parametrize(
"timestamp, expected",
[
(datetime.datetime.now(tz=datetime.timezone.utc).timestamp() - 7200, True),
(datetime.datetime.now(tz=datetime.timezone.utc).timestamp(), False),
],
)
def test_check_if_model_expired(mocker, default_conf, timestamp, expected):
freqaiconf = freqai_conf(copy.deepcopy(default_conf))
dk = get_patched_data_kitchen(mocker, freqaiconf)
assert dk.check_if_model_expired(timestamp) == expected
shutil.rmtree(Path(dk.full_path))
def test_load_all_pairs_histories(mocker, default_conf):
freqaiconf = freqai_conf(copy.deepcopy(default_conf))
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
freqai = strategy.model.bridge
freqai.live = True
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180114")
freqai.dk.load_all_pair_histories(timerange)
assert len(freqai.dd.historic_data.keys()) == len(
freqaiconf.get("exchange", {}).get("pair_whitelist")
)
assert len(freqai.dd.historic_data["ADA/BTC"]) == len(
freqaiconf.get("freqai", {}).get("feature_parameters", {}).get("include_timeframes")
)
shutil.rmtree(Path(freqai.dk.full_path))
def test_get_base_and_corr_dataframes(mocker, default_conf):
freqaiconf = freqai_conf(copy.deepcopy(default_conf))
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
freqai = strategy.model.bridge
freqai.live = True
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180114")
freqai.dk.load_all_pair_histories(timerange)
sub_timerange = TimeRange.parse_timerange("20180111-20180114")
corr_df, base_df = freqai.dk.get_base_and_corr_dataframes(sub_timerange, "LTC/BTC")
num_tfs = len(
freqaiconf.get("freqai", {}).get("feature_parameters", {}).get("include_timeframes")
)
assert len(base_df.keys()) == num_tfs
assert len(corr_df.keys()) == len(
freqaiconf.get("freqai", {}).get("feature_parameters", {}).get("include_corr_pairlist")
)
assert len(corr_df["ADA/BTC"].keys()) == num_tfs
shutil.rmtree(Path(freqai.dk.full_path))
def test_use_strategy_to_populate_indicators(mocker, default_conf):
freqaiconf = freqai_conf(copy.deepcopy(default_conf))
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
strategy.freqai_info = freqaiconf.get("freqai", {})
freqai = strategy.model.bridge
freqai.live = True
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180114")
freqai.dk.load_all_pair_histories(timerange)
sub_timerange = TimeRange.parse_timerange("20180111-20180114")
corr_df, base_df = freqai.dk.get_base_and_corr_dataframes(sub_timerange, "LTC/BTC")
df = freqai.dk.use_strategy_to_populate_indicators(strategy, corr_df, base_df, 'LTC/BTC')
assert len(df.columns) == 45
shutil.rmtree(Path(freqai.dk.full_path))


@ -0,0 +1,181 @@
# from unittest.mock import MagicMock
# from freqtrade.commands.optimize_commands import setup_optimize_configuration, start_edge
import copy
# import platform
import shutil
from pathlib import Path
from unittest.mock import MagicMock
from freqtrade.configuration import TimeRange
from freqtrade.data.dataprovider import DataProvider
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
from tests.conftest import get_patched_exchange, log_has_re
from tests.freqai.conftest import freqai_conf, get_patched_freqai_strategy
def test_train_model_in_series_LightGBM(mocker, default_conf):
freqaiconf = freqai_conf(copy.deepcopy(default_conf))
freqaiconf.update({"timerange": "20180110-20180130"})
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
strategy.freqai_info = freqaiconf.get("freqai", {})
freqai = strategy.model.bridge
freqai.live = True
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180130")
freqai.dk.load_all_pair_histories(timerange)
freqai.dd.pair_dict = MagicMock()
data_load_timerange = TimeRange.parse_timerange("20180110-20180130")
new_timerange = TimeRange.parse_timerange("20180120-20180130")
freqai.train_model_in_series(new_timerange, "ADA/BTC", strategy, freqai.dk, data_load_timerange)
assert (
Path(freqai.dk.data_path / str(freqai.dk.model_filename + "_model.joblib"))
.resolve()
.exists()
)
assert (
Path(freqai.dk.data_path / str(freqai.dk.model_filename + "_metadata.json"))
.resolve()
.exists()
)
assert (
Path(freqai.dk.data_path / str(freqai.dk.model_filename + "_trained_df.pkl"))
.resolve()
.exists()
)
assert (
Path(freqai.dk.data_path / str(freqai.dk.model_filename + "_svm_model.joblib"))
.resolve()
.exists()
)
shutil.rmtree(Path(freqai.dk.full_path))
# FIXME: hits segfault
# @pytest.mark.skipif("arm" in platform.uname()[-1], reason="no ARM..")
# def test_train_model_in_series_Catboost(mocker, default_conf):
# freqaiconf = freqai_conf(copy.deepcopy(default_conf))
# freqaiconf.update({"timerange": "20180110-20180130"})
# freqaiconf.update({"freqaimodel": "CatboostPredictionModel"})
# strategy = get_patched_freqai_strategy(mocker, freqaiconf)
# exchange = get_patched_exchange(mocker, freqaiconf)
# strategy.dp = DataProvider(freqaiconf, exchange)
# strategy.freqai_info = freqaiconf.get("freqai", {})
# freqai = strategy.model.bridge
# freqai.live = True
# freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
# timerange = TimeRange.parse_timerange("20180110-20180130")
# freqai.dk.load_all_pair_histories(timerange)
# freqai.dd.pair_dict = MagicMock()
# data_load_timerange = TimeRange.parse_timerange("20180110-20180130")
# new_timerange = TimeRange.parse_timerange("20180120-20180130")
# freqai.train_model_in_series(new_timerange, "ADA/BTC",
# strategy, freqai.dk, data_load_timerange)
# assert (
# Path(freqai.dk.data_path / str(freqai.dk.model_filename + "_model.joblib"))
# .resolve()
# .exists()
# )
# assert (
# Path(freqai.dk.data_path / str(freqai.dk.model_filename + "_metadata.json"))
# .resolve()
# .exists()
# )
# assert (
# Path(freqai.dk.data_path / str(freqai.dk.model_filename + "_trained_df.pkl"))
# .resolve()
# .exists()
# )
# assert (
# Path(freqai.dk.data_path / str(freqai.dk.model_filename + "_svm_model.joblib"))
# .resolve()
# .exists()
# )
# shutil.rmtree(Path(freqai.dk.full_path))
def test_start_backtesting(mocker, default_conf):
freqaiconf = freqai_conf(copy.deepcopy(default_conf))
freqaiconf.update({"timerange": "20180120-20180130"})
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
strategy.freqai_info = freqaiconf.get("freqai", {})
freqai = strategy.model.bridge
freqai.live = False
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180130")
freqai.dk.load_all_pair_histories(timerange)
sub_timerange = TimeRange.parse_timerange("20180110-20180130")
corr_df, base_df = freqai.dk.get_base_and_corr_dataframes(sub_timerange, "LTC/BTC")
df = freqai.dk.use_strategy_to_populate_indicators(strategy, corr_df, base_df, "LTC/BTC")
metadata = {"pair": "ADA/BTC"}
freqai.start_backtesting(df, metadata, freqai.dk)
model_folders = [x for x in freqai.dd.full_path.iterdir() if x.is_dir()]
assert len(model_folders) == 5
shutil.rmtree(Path(freqai.dk.full_path))
def test_start_backtesting_from_existing_folder(mocker, default_conf, caplog):
freqaiconf = freqai_conf(copy.deepcopy(default_conf))
freqaiconf.update({"timerange": "20180120-20180130"})
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
strategy.freqai_info = freqaiconf.get("freqai", {})
freqai = strategy.model.bridge
freqai.live = False
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180130")
freqai.dk.load_all_pair_histories(timerange)
sub_timerange = TimeRange.parse_timerange("20180110-20180130")
corr_df, base_df = freqai.dk.get_base_and_corr_dataframes(sub_timerange, "LTC/BTC")
df = freqai.dk.use_strategy_to_populate_indicators(strategy, corr_df, base_df, "LTC/BTC")
metadata = {"pair": "ADA/BTC"}
freqai.start_backtesting(df, metadata, freqai.dk)
model_folders = [x for x in freqai.dd.full_path.iterdir() if x.is_dir()]
assert len(model_folders) == 5
# without deleting the existing folder structure, re-run
freqaiconf.update({"timerange": "20180120-20180130"})
strategy = get_patched_freqai_strategy(mocker, freqaiconf)
exchange = get_patched_exchange(mocker, freqaiconf)
strategy.dp = DataProvider(freqaiconf, exchange)
strategy.freqai_info = freqaiconf.get("freqai", {})
freqai = strategy.model.bridge
freqai.live = False
freqai.dk = FreqaiDataKitchen(freqaiconf, freqai.dd)
timerange = TimeRange.parse_timerange("20180110-20180130")
freqai.dk.load_all_pair_histories(timerange)
sub_timerange = TimeRange.parse_timerange("20180110-20180130")
corr_df, base_df = freqai.dk.get_base_and_corr_dataframes(sub_timerange, "LTC/BTC")
df = freqai.dk.use_strategy_to_populate_indicators(strategy, corr_df, base_df, "LTC/BTC")
freqai.start_backtesting(df, metadata, freqai.dk)
assert log_has_re(
"Found model at ",
caplog,
)
shutil.rmtree(Path(freqai.dk.full_path))


@ -1403,7 +1403,8 @@ def test_api_strategies(botclient):
'StrategyTestV2',
'StrategyTestV3',
'StrategyTestV3Analysis',
'StrategyTestV3Futures'
'StrategyTestV3Futures',
'freqai_test_strat'
]}


@ -0,0 +1,182 @@
import logging
from functools import reduce
import pandas as pd
import talib.abstract as ta
from pandas import DataFrame
from freqtrade.freqai.strategy_bridge import CustomModel
from freqtrade.strategy import DecimalParameter, IntParameter, merge_informative_pair
from freqtrade.strategy.interface import IStrategy
logger = logging.getLogger(__name__)
class freqai_test_strat(IStrategy):
"""
Example strategy showing how the user connects their own
IFreqaiModel to the strategy. Namely, the user uses:
self.model = CustomModel(self.config)
self.model.bridge.start(dataframe, metadata)
to make predictions on their data. populate_any_indicators() automatically
generates the variety of features indicated by the user in the
canonical freqtrade configuration file under config['freqai'].
"""
minimal_roi = {"0": 0.1, "240": -1}
plot_config = {
"main_plot": {},
"subplots": {
"prediction": {"prediction": {"color": "blue"}},
"target_roi": {
"target_roi": {"color": "brown"},
},
"do_predict": {
"do_predict": {"color": "brown"},
},
},
}
process_only_new_candles = True
stoploss = -0.05
use_exit_signal = True
startup_candle_count: int = 300
can_short = False
linear_roi_offset = DecimalParameter(
0.00, 0.02, default=0.005, space="sell", optimize=False, load=True
)
max_roi_time_long = IntParameter(0, 800, default=400, space="sell", optimize=False, load=True)
def informative_pairs(self):
whitelist_pairs = self.dp.current_whitelist()
corr_pairs = self.config["freqai"]["feature_parameters"]["include_corr_pairlist"]
informative_pairs = []
for tf in self.config["freqai"]["feature_parameters"]["include_timeframes"]:
for pair in whitelist_pairs:
informative_pairs.append((pair, tf))
for pair in corr_pairs:
if pair in whitelist_pairs:
continue # avoid duplication
informative_pairs.append((pair, tf))
return informative_pairs
def bot_start(self):
self.model = CustomModel(self.config)
def populate_any_indicators(
self, metadata, pair, df, tf, informative=None, coin="", set_generalized_indicators=False
):
"""
Function designed to automatically generate, name and merge features
from user indicated timeframes in the configuration file. User controls the indicators
passed to the training/prediction by prepending indicators with `'%-' + coin `
(see convention below). I.e. user should not prepend any supporting metrics
(e.g. bb_lowerband below) with % unless they explicitly want to pass that metric to the
model.
:params:
:pair: pair to be used as informative
:df: strategy dataframe which will receive merges from informatives
:tf: timeframe of the dataframe which will modify the feature names
:informative: the dataframe associated with the informative pair
:coin: the name of the coin which will modify the feature names.
"""
with self.model.bridge.lock:
if informative is None:
informative = self.dp.get_pair_dataframe(pair, tf)
# first loop is automatically duplicating indicators for time periods
for t in self.freqai_info["feature_parameters"]["indicator_periods_candles"]:
t = int(t)
informative[f"%-{coin}rsi-period_{t}"] = ta.RSI(informative, timeperiod=t)
informative[f"%-{coin}mfi-period_{t}"] = ta.MFI(informative, timeperiod=t)
informative[f"%-{coin}adx-period_{t}"] = ta.ADX(informative, window=t)
informative[f"%-{coin}pct-change"] = informative["close"].pct_change()
informative[f"%-{coin}raw_volume"] = informative["volume"]
informative[f"%-{coin}raw_price"] = informative["close"]
indicators = [col for col in informative if col.startswith("%")]
# This loop duplicates and shifts all indicators to add a sense of recency to data
for n in range(self.freqai_info["feature_parameters"]["include_shifted_candles"] + 1):
if n == 0:
continue
informative_shift = informative[indicators].shift(n)
informative_shift = informative_shift.add_suffix("_shift-" + str(n))
informative = pd.concat((informative, informative_shift), axis=1)
df = merge_informative_pair(df, informative, self.config["timeframe"], tf, ffill=True)
skip_columns = [
(s + "_" + tf) for s in ["date", "open", "high", "low", "close", "volume"]
]
df = df.drop(columns=skip_columns)
# Add generalized indicators here (because in live, it will call this
# function to populate indicators during training). Notice how we ensure not to
# add them multiple times
if set_generalized_indicators:
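# the +1 and the divisor rescale the calendar values into (0, 1]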
df["%-day_of_week"] = (df["date"].dt.dayofweek + 1) / 7
df["%-hour_of_day"] = (df["date"].dt.hour + 1) / 25
# user adds targets here by prepending them with &- (see convention below)
# If user wishes to use multiple targets, a multioutput prediction model
# needs to be used such as templates/CatboostPredictionMultiModel.py
df["&-s_close"] = (
df["close"]
.shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
.rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
.mean()
/ df["close"]
- 1
)
return df
def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
self.freqai_info = self.config["freqai"]
# All indicators must be populated by populate_any_indicators() for live functionality
# to work correctly.
# the model will return 4 values, its prediction, an indication of whether or not the
# prediction should be accepted, the target mean/std values from the labels used during
# each training period.
dataframe = self.model.bridge.start(dataframe, metadata, self)
dataframe["target_roi"] = dataframe["&-s_close_mean"] + dataframe["&-s_close_std"] * 1.25
dataframe["sell_roi"] = dataframe["&-s_close_mean"] - dataframe["&-s_close_std"] * 1.25
return dataframe
def populate_entry_trend(self, df: DataFrame, metadata: dict) -> DataFrame:
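# do_predict == 1 indicates that FreqAI considered the prediction acceptable (e.g. not an outlier)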
enter_long_conditions = [df["do_predict"] == 1, df["&-s_close"] > df["target_roi"]]
if enter_long_conditions:
df.loc[
reduce(lambda x, y: x & y, enter_long_conditions), ["enter_long", "enter_tag"]
] = (1, "long")
enter_short_conditions = [df["do_predict"] == 1, df["&-s_close"] < df["sell_roi"]]
if enter_short_conditions:
df.loc[
reduce(lambda x, y: x & y, enter_short_conditions), ["enter_short", "enter_tag"]
] = (1, "short")
return df
def populate_exit_trend(self, df: DataFrame, metadata: dict) -> DataFrame:
exit_long_conditions = [df["do_predict"] == 1, df["&-s_close"] < df["sell_roi"] * 0.25]
if exit_long_conditions:
df.loc[reduce(lambda x, y: x & y, exit_long_conditions), "exit_long"] = 1
exit_short_conditions = [df["do_predict"] == 1, df["&-s_close"] > df["target_roi"] * 0.25]
if exit_short_conditions:
df.loc[reduce(lambda x, y: x & y, exit_short_conditions), "exit_short"] = 1
return df


@ -34,7 +34,7 @@ def test_search_all_strategies_no_failed():
directory = Path(__file__).parent / "strats"
strategies = StrategyResolver.search_all_objects(directory, enum_failed=False)
assert isinstance(strategies, list)
assert len(strategies) == 7
assert len(strategies) == 8
assert isinstance(strategies[0], dict)
@ -42,10 +42,10 @@ def test_search_all_strategies_with_failed():
directory = Path(__file__).parent / "strats"
strategies = StrategyResolver.search_all_objects(directory, enum_failed=True)
assert isinstance(strategies, list)
assert len(strategies) == 8
assert len(strategies) == 9
# with enum_failed=True search_all_objects() shall find 2 good strategies
# and 1 which fails to load
assert len([x for x in strategies if x['class'] is not None]) == 7
assert len([x for x in strategies if x['class'] is not None]) == 8
assert len([x for x in strategies if x['class'] is None]) == 1