Merge branch 'develop' of github.com:lolongcovas/freqtrade into feat/freqai
This commit is contained in: commit ffe7792251

.gitignore
@@ -15,6 +15,7 @@ user_data/notebooks/*
 freqtrade-plot.html
 freqtrade-profit-plot.html
 freqtrade/rpc/api_server/ui/*
+build_helpers/ta-lib/*

 # Macos related
 .DS_Store

build_helpers/install_ta-lib.sh
@@ -4,7 +4,7 @@ else
   INSTALL_LOC=${1}
 fi
 echo "Installing to ${INSTALL_LOC}"
-if [ ! -f "${INSTALL_LOC}/lib/libta_lib.a" ]; then
+if [ -n "$2" ] || [ ! -f "${INSTALL_LOC}/lib/libta_lib.a" ]; then
   tar zxvf ta-lib-0.4.0-src.tar.gz
   cd ta-lib \
   && sed -i.bak "s|0.00000001|0.000000000000000001 |g" src/ta_func/ta_utility.h \
@@ -17,11 +17,17 @@ if [ ! -f "${INSTALL_LOC}/lib/libta_lib.a" ]; then
     cd .. && rm -rf ./ta-lib/
     exit 1
   fi
-  which sudo && sudo make install || make install
-  if [ -x "$(command -v apt-get)" ]; then
-    echo "Updating library path using ldconfig"
-    sudo ldconfig
+  if [ -z "$2" ]; then
+    which sudo && sudo make install || make install
+    if [ -x "$(command -v apt-get)" ]; then
+      echo "Updating library path using ldconfig"
+      sudo ldconfig
+    fi
+  else
+    # Don't install with sudo
+    make install
   fi

   cd .. && rm -rf ./ta-lib/
 else
   echo "TA-lib already installed, skipping installation"
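With this change, the TA-lib install helper (presumably `build_helpers/install_ta-lib.sh`, given the new `build_helpers/ta-lib/*` ignore entry above) accepts an optional second argument. Passing any non-empty value, e.g. `./install_ta-lib.sh /usr/local 1`, forces the build even when `libta_lib.a` already exists and runs `make install` without `sudo`; omitting it keeps the previous sudo-aware behavior.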

@@ -5,6 +5,7 @@
     "tradable_balance_ratio": 0.99,
     "fiat_display_currency": "USD",
     "amount_reserve_percent": 0.05,
+    "available_capital": 1000,
     "amend_last_stake_amount": false,
     "last_stake_amount_min_ratio": 0.5,
     "dry_run": true,
Binary image assets updated (328 KiB to 995 KiB; 44 KiB to 2.0 MiB); one file diff suppressed because one or more lines are too long.
docs/freqai.md
@@ -6,16 +6,16 @@ FreqAI is a module designed to automate a variety of tasks associated with train

 Among the features included:

-* **Self-adaptive retraining**: automatically retrain models during live deployments to self-adapt to the market in an unsupervised manner.
+* **Self-adaptive retraining**: retrain models during live deployments to self-adapt to the market in an unsupervised manner.
 * **Rapid feature engineering**: create large rich feature sets (10k+ features) based on simple user created strategies.
 * **High performance**: adaptive retraining occurs on separate thread (or on GPU if available) from inferencing and bot trade operations. Keep newest models and data in memory for rapid inferencing.
 * **Realistic backtesting**: emulate self-adaptive retraining with backtesting module that automates past retraining.
 * **Modifiable**: use the generalized and robust architecture for incorporating any machine learning library/method available in Python. Seven examples available.
-* **Smart outlier removal**: remove outliers automatically from training and prediction sets using a variety of outlier detection techniques.
-* **Crash resilience**: automatic model storage to disk to make reloading from a crash fast and easy (and purge obsolete files automatically for sustained dry/live runs).
-* **Automated data normalization**: automatically normalize the data automatically in a smart and statistically safe way.
-* **Automatic data download**: automatically compute the data download timerange and downloads data accordingly (in live deployments).
-* **Clean the incoming data of NaNs in a safe way before training and prediction.
+* **Smart outlier removal**: remove outliers from training and prediction sets using a variety of outlier detection techniques.
+* **Crash resilience**: model storage to disk to make reloading from a crash fast and easy (and purge obsolete files for sustained dry/live runs).
+* **Automated data normalization**: normalize the data in a smart and statistically safe way.
+* **Automatic data download**: compute the data download timerange and update historic data (in live deployments).
+* **Clean incoming data**: safe NaN handling before training and prediction.
 * **Dimensionality reduction**: reduce the size of the training data via Principal Component Analysis.
 * **Deploy bot fleets**: set one bot to train models while a fleet of other bots inference into the models and handle trades.

@@ -37,7 +37,7 @@ The example strategy, example prediction model, and example config can all be fo

 The user provides FreqAI with a set of custom *base* indicators (created inside the strategy the same way
 a typical Freqtrade strategy is created) as well as target values which look into the future.
 FreqAI trains a model to predict the target value based on the input of custom indicators for each pair in the whitelist. These models are consistently retrained to adapt to market conditions. FreqAI offers the ability to both backtest strategies (emulating reality with periodic retraining) and deploy dry/live. In dry/live conditions, FreqAI can be set to constant retraining in a background thread in an effort to keep models as young as possible.

 An overview of the algorithm is shown here to help users understand the data processing pipeline and the model usage.

@@ -66,7 +66,7 @@ directly influence nodal weights within the model.

 ## Install prerequisites

-The normal Freqtrade install process will ask the user if they wish to install `FreqAI` dependencies. The user should reply "yes" to this question if they wish to use FreqAI. If the user did not reply yes, they can manually install these dependencies after the install with:
+The normal Freqtrade install process will ask the user if they wish to install FreqAI dependencies. The user should reply "yes" to this question if they wish to use FreqAI. If the user did not reply yes, they can manually install these dependencies after the install with:

 ``` bash
 pip install -r requirements-freqai.txt
@@ -75,21 +75,21 @@ pip install -r requirements-freqai.txt
 !!! Note
     Catboost will not be installed on arm devices (raspberry, Mac M1, ARM based VPS, ...), since Catboost does not provide wheels for this platform.

-## Configuring the bot
+## Configuring FreqAI

 ### Parameter table

-The table below will list all configuration parameters available for `FreqAI`.
+The table below lists all configuration parameters available for FreqAI.

 Mandatory parameters are marked as **Required**, which means that they are required to be set in one of the possible ways.

 | Parameter | Description |
 |------------|-------------|
-| `freqai` | **Required.** The dictionary containing all the parameters for controlling FreqAI. <br> **Datatype:** dictionary.
+| `freqai` | **Required.** The parent dictionary containing all the parameters below for controlling FreqAI. <br> **Datatype:** dictionary.
 | `identifier` | **Required.** A unique name for the current model. This can be reused to reload pre-trained models/data. <br> **Datatype:** string.
 | `train_period_days` | **Required.** Number of days to use for the training data (width of the sliding window). <br> **Datatype:** positive integer.
 | `backtest_period_days` | **Required.** Number of days to inference into the trained model before sliding the window and retraining. This can be fractional days, but beware that the user provided `timerange` will be divided by this number to yield the number of trainings necessary to complete the backtest. <br> **Datatype:** Float.
-| `live_retrain_hours` | Frequency of retraining during dry/live runs. Default set to 0, which means it will retrain as often as possible. **Datatype:** Float > 0.
+| `live_retrain_hours` | Frequency of retraining during dry/live runs. Default set to 0, which means it will retrain as often as possible. <br> **Datatype:** Float > 0.
 | `follow_mode` | If true, this instance of FreqAI will look for models associated with `identifier` and load those for inferencing. A `follower` will **not** train new models. `False` by default. <br> **Datatype:** boolean.
 | `startup_candles` | Number of candles needed for *backtesting only* to ensure all indicators are non NaNs at the start of the first train period. <br> **Datatype:** positive integer.
 | `fit_live_predictions_candles` | Computes target (label) statistics from prediction data, instead of from the training data set. Number of candles is the number of historical candles it uses to generate the statistics. <br> **Datatype:** positive integer.
@@ -101,11 +101,11 @@ Mandatory parameters are marked as **Required**, which means that they are requi
 | `include_timeframes` | A list of timeframes that all indicators in `populate_any_indicators` will be created for and added as features to the base asset feature set. <br> **Datatype:** list of timeframes (strings).
 | `label_period_candles` | Number of candles into the future that the labels are created for. This is used in `populate_any_indicators`, refer to `templates/FreqaiExampleStrategy.py` for detailed usage. The user can create custom labels, making use of this parameter or not. <br> **Datatype:** positive integer.
 | `include_shifted_candles` | Parameter used to add a sense of temporal recency to flattened regression type input data. `include_shifted_candles` takes all features, duplicates and shifts them by the number indicated by user. <br> **Datatype:** positive integer.
-| `DI_threshold` | Activates the Dissimilarity Index for outlier detection when above 0, explained more [here](#removing-outliers-with-the-dissimilarity-index). <br> **Datatype:** positive float (typically below 1).
+| `DI_threshold` | Activates the Dissimilarity Index for outlier detection when above 0, explained in detail [here](#removing-outliers-with-the-dissimilarity-index). <br> **Datatype:** positive float (typically below 1).
 | `weight_factor` | Used to set weights for training data points according to their recency, see details and a figure of how it works [here](#controlling-the-model-learning-process). <br> **Datatype:** positive float (typically below 1).
 | `principal_component_analysis` | Ask FreqAI to automatically reduce the dimensionality of the data set using PCA. <br> **Datatype:** boolean.
 | `use_SVM_to_remove_outliers` | Ask FreqAI to train a support vector machine to detect and remove outliers from the training data set as well as from incoming data points. <br> **Datatype:** boolean.
-| `svm_params` | All parameters available in Sklearn's `SGDOneClassSVM()`. E.g. `nu` *Very* broadly, is the percentage of data points that should be considered outliers. `shuffle` is by default false to maintain reprodicibility. But these and all others can be added/changed in this dictionary. <br> **Datatype:** dictionary.
+| `svm_params` | All parameters available in Sklearn's `SGDOneClassSVM()`. E.g. `nu`, which *very* broadly is the percentage of data points that should be considered outliers. `shuffle` is false by default to maintain reproducibility. These and all others can be added/changed in this dictionary. <br> **Datatype:** dictionary.
 | `stratify_training_data` | This value is used to indicate the stratification of the data. e.g. 2 would set every 2nd data point into a separate dataset to be pulled from during training/testing. <br> **Datatype:** positive integer.
 | `indicator_max_period_candles` | The maximum *period* used in `populate_any_indicators()` for indicator creation. FreqAI uses this information in combination with the maximum timeframe to calculate how many data points it should download so that the first data point does not have a NaN. <br> **Datatype:** positive integer.
 | `indicator_periods_candles` | A list of integers used to duplicate all indicators according to a set of periods and add them to the feature set. <br> **Datatype:** list of positive integers.
@@ -113,7 +113,7 @@ Mandatory parameters are marked as **Required**, which means that they are requi
 | | **Data split parameters**
 | `data_split_parameters` | Include any additional parameters available from Scikit-learn's `train_test_split()`, which are shown [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) (see also the sketch below this table). <br> **Datatype:** dictionary.
 | `test_size` | Fraction of data that should be used for testing instead of training. <br> **Datatype:** positive float below 1.
-| `shuffle` | Shuffle the training data points during training. Typically for time-series forecasting, this is set to False. **Datatype:** boolean.
+| `shuffle` | Shuffle the training data points during training. Typically for time-series forecasting, this is set to False. <br> **Datatype:** boolean.
 | | **Model training parameters**
 | `model_training_parameters` | A flexible dictionary that includes all parameters available by the user selected library. For example, if the user uses `LightGBMRegressor`, then this dictionary can contain any parameter available by the `LightGBMRegressor` [here](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html). If the user selects a different model, then this dictionary can contain any parameter from that different model. <br> **Datatype:** dictionary.
 | `n_estimators` | A common parameter among regressors which sets the number of boosted trees to fit. <br> **Datatype:** integer.
@@ -123,8 +123,8 @@ Mandatory parameters are marked as **Required**, which means that they are requi
 | `keras` | If your model makes use of keras (typical of Tensorflow based prediction models), activate this flag so that the model save/loading follows keras standards. Default value `false` <br> **Datatype:** boolean.
 | `conv_width` | The width of a convolutional neural network input tensor. This replaces the need for `shift` by feeding in historical data points as the second dimension of the tensor. Technically, this parameter can also be used for regressors, but it only adds computational overhead and does not change the model training/prediction. Default value, 2 <br> **Datatype:** integer.
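As referenced in the `data_split_parameters` row above, the dictionary is forwarded to scikit-learn. A minimal illustrative sketch (not FreqAI internals; the data here is synthetic):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# The `data_split_parameters` dict maps directly onto train_test_split() kwargs.
data_split_parameters = {"test_size": 0.25, "shuffle": False}

X = np.arange(100).reshape(50, 2)   # synthetic feature matrix (50 rows)
y = np.arange(50)                   # synthetic labels
X_train, X_test, y_train, y_test = train_test_split(X, y, **data_split_parameters)
print(len(X_train), len(X_test))    # 37 13
```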


 ### Important FreqAI dataframe key patterns

 Here are the values the user can expect to include/use inside the typical strategy dataframe (`df[]`):

 | DataFrame Key | Description |
@@ -133,55 +133,52 @@ Here are the values the user can expect to include/use inside the typical strate
 | `df['&*_std/mean']` | The standard deviation and mean values of the user defined labels during training (or live tracking with `fit_live_predictions_candles`). Commonly used to understand the rarity of a prediction (use the z-score, as shown in `templates/FreqaiExampleStrategy.py` and sketched below this table, to evaluate how often a particular prediction was observed during training, or historically with `fit_live_predictions_candles`). <br> **Datatype:** float.
 | `df['do_predict']` | An indication of an outlier, this return value is an integer between -1 and 2 which lets the user understand if the prediction is trustworthy or not. `do_predict==1` means the prediction is trustworthy. If the [Dissimilarity Index](#removing-outliers-with-the-dissimilarity-index) is above the user defined threshold, it will subtract 1 from `do_predict`. If `use_SVM_to_remove_outliers()` is active, then the Support Vector Machine (SVM) may also detect outliers in training and prediction data. In this case, the SVM will also subtract one from `do_predict`. A particular case is when `do_predict == 2`, which means that the model has expired due to `expired_hours`. <br> **Datatype:** integer between -1 and 2.
 | `df['DI_values']` | The raw Dissimilarity Index values to give the user a sense of confidence in the prediction. Lower DI means the data point is closer to the trained parameter space. <br> **Datatype:** float.
-| `df['%*']` | Any dataframe column prepended with `%` in `populate_any_indicators()` is treated as a training feature inside FreqAI. For example, the user can include the rsi in the training feature set (similar to `templates/FreqaiExampleStrategy.py`) by setting `df['%-rsi']`. See more details on how this is done [here](#building-the-feature-set). Note: since the number of features prepended with `%` can multiply very quickly (10s of thousands of features is easily engineered using the multiplictative functionality described in the `feature_parameters` table.) these features are removed from the dataframe upon return from FreqAI. If the user wishes to keep a particular type of feature for plotting purposes, you can prepend it with `%%`. <br> **Datatype:** depends on the output of the model.
+| `df['%*']` | Any dataframe column prepended with `%` in `populate_any_indicators()` is treated as a training feature inside FreqAI. For example, the user can include the rsi in the training feature set (similar to `templates/FreqaiExampleStrategy.py`) by setting `df['%-rsi']`. See more details on how this is done [here](#building-the-feature-set). <br>**Note**: since the number of features prepended with `%` can multiply very quickly (10s of thousands of features are easily engineered using the multiplicative functionality described in the `feature_parameters` table), these features are removed from the dataframe upon return from FreqAI. If the user wishes to keep a particular type of feature for plotting purposes, you can prepend it with `%%`. <br> **Datatype:** depends on the output of the model.
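To make the `do_predict` gate and the z-score usage mentioned in the `df['&*_std/mean']` row concrete, a small illustrative sketch (not FreqAI internals; the column names follow the example strategy's `&-s_close` label and are assumptions here):

```python
import pandas as pd

# Illustrative sketch (column names such as "&-s_close_mean" are assumptions).
df = pd.DataFrame({
    "&-s_close": [0.01, -0.04, 0.02],
    "&-s_close_mean": [0.0, 0.0, 0.0],
    "&-s_close_std": [0.02, 0.02, 0.02],
    "do_predict": [1, 1, 0],
})

# z-score: how rare is this prediction relative to training statistics?
z = (df["&-s_close"] - df["&-s_close_mean"]) / df["&-s_close_std"]

# only act on trustworthy predictions (no outlier flags, model not expired)
tradable = (df["do_predict"] == 1) & (z.abs() < 2)
```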

 ### Example config file

-The user interface is isolated to the typical config file. A typical FreqAI
-config setup could include:
+The user interface is isolated to the typical config file. A typical FreqAI config setup could include:

 ```json
     "freqai": {
         "startup_candles": 10000,
         "purge_old_models": true,
-        "train_period_days" : 30,
-        "backtest_period_days" : 7,
+        "train_period_days": 30,
+        "backtest_period_days": 7,
         "identifier" : "unique-id",
         "feature_parameters" : {
-            "include_timeframes" : ["5m","15m","4h"],
+            "include_timeframes": ["5m","15m","4h"],
             "include_corr_pairlist": [
                 "ETH/USD",
                 "LINK/USD",
                 "BNB/USD"
             ],
             "label_period_candles": 24,
             "include_shifted_candles": 2,
             "weight_factor": 0,
             "indicator_max_period_candles": 20,
             "indicator_periods_candles": [10, 20]
         },
         "data_split_parameters" : {
             "test_size": 0.25,
             "random_state": 42
         },
         "model_training_parameters" : {
             "n_estimators": 100,
             "random_state": 42,
             "learning_rate": 0.02,
             "task_type": "CPU"
         }
     }
 ```

 ### Feature engineering

 Features are added by the user inside the `populate_any_indicators()` method of the strategy,
 by prepending indicators with `%`, and labels are added by prepending `&`.
 There are some important components/structures that the user *must* include when building their feature set.
-As shown below, `with self.freqai.lock:` must be used to ensure thread safety - especially when using third
-party libraries for indicator construction such as TA-lib.
 Another structure to consider is the location of the labels at the bottom of the example function (below `if set_generalized_indicators:`).
 This is where the user will add single features and labels to their feature set to avoid duplication from
 various configuration parameters which multiply the feature set, such as `include_timeframes`.

 ```python
@@ -204,93 +201,56 @@ various configuration parameters which multiply the feature set such as `include

         coin = pair.split('/')[0]

-        with self.freqai.lock:
-            if informative is None:
-                informative = self.dp.get_pair_dataframe(pair, tf)
+        if informative is None:
+            informative = self.dp.get_pair_dataframe(pair, tf)

         # first loop is automatically duplicating indicators for time periods
         for t in self.freqai_info["feature_parameters"]["indicator_periods_candles"]:
             t = int(t)
             informative[f"%-{coin}rsi-period_{t}"] = ta.RSI(informative, timeperiod=t)
             informative[f"%-{coin}mfi-period_{t}"] = ta.MFI(informative, timeperiod=t)
             informative[f"%-{coin}adx-period_{t}"] = ta.ADX(informative, window=t)

             bollinger = qtpylib.bollinger_bands(
                 qtpylib.typical_price(informative), window=t, stds=2.2
             )
             informative[f"{coin}bb_lowerband-period_{t}"] = bollinger["lower"]
             informative[f"{coin}bb_middleband-period_{t}"] = bollinger["mid"]
             informative[f"{coin}bb_upperband-period_{t}"] = bollinger["upper"]

             informative[f"%-{coin}bb_width-period_{t}"] = (
                 informative[f"{coin}bb_upperband-period_{t}"]
                 - informative[f"{coin}bb_lowerband-period_{t}"]
             ) / informative[f"{coin}bb_middleband-period_{t}"]
             informative[f"%-{coin}close-bb_lower-period_{t}"] = (
                 informative["close"] / informative[f"{coin}bb_lowerband-period_{t}"]
             )

             informative[f"%-{coin}relative_volume-period_{t}"] = (
                 informative["volume"] / informative["volume"].rolling(t).mean()
             )

         indicators = [col for col in informative if col.startswith("%")]
         # This loop duplicates and shifts all indicators to add a sense of recency to data
         for n in range(self.freqai_info["feature_parameters"]["include_shifted_candles"] + 1):
             if n == 0:
                 continue
             informative_shift = informative[indicators].shift(n)
             informative_shift = informative_shift.add_suffix("_shift-" + str(n))
             informative = pd.concat((informative, informative_shift), axis=1)

         df = merge_informative_pair(df, informative, self.config["timeframe"], tf, ffill=True)
         skip_columns = [
             (s + "_" + tf) for s in ["date", "open", "high", "low", "close", "volume"]
         ]
         df = df.drop(columns=skip_columns)

         # Add generalized indicators here (because in live, it will call this
         # function to populate indicators during training). Notice how we ensure not to
         # add them multiple times
-        if set_generalized_indicators:
-            df["%-day_of_week"] = (df["date"].dt.dayofweek + 1) / 7
-            df["%-hour_of_day"] = (df["date"].dt.hour + 1) / 25
-
-            # user adds targets here by prepending them with &- (see convention below)
-            # If user wishes to use multiple targets, a multioutput prediction model
-            # needs to be used such as templates/CatboostPredictionMultiModel.py
-            df["&-s_close"] = (
-                df["close"]
-                .shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
-                .rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
-                .mean()
-                / df["close"]
-                - 1
-            )
-
-            return df
-```
-
-The user of the present example does not wish to pass the `bb_lowerband` as a feature to the model,
-and has therefore not prepended it with `%`. The user does, however, wish to pass `bb_width` to the
-model for training/prediction and has therefore prepended it with `%`.
-
-Note: features **must** be defined in `populate_any_indicators()`. Making features in `populate_indicators()`
-will fail in live/dry mode. If the user wishes to add generalized features that are not associated with
-a specific pair or timeframe, they should use the following structure inside `populate_any_indicators()`
-(as exemplified in `freqtrade/templates/FreqaiExampleStrategy.py`:
-
-```python
-    def populate_any_indicators(self, metadata, pair, df, tf, informative=None, coin="", set_generalized_indicators=False):
-
-        ...
-
-        # Add generalized indicators here (because in live, it will call only this function to populate
-        # indicators for retraining). Notice how we ensure not to add them multiple times by associating
-        # these generalized indicators to the basepair/timeframe
         if set_generalized_indicators:
-            df['%-day_of_week'] = (df["date"].dt.dayofweek + 1) / 7
-            df['%-hour_of_day'] = (df['date'].dt.hour + 1) / 25
+            df["%-day_of_week"] = (df["date"].dt.dayofweek + 1) / 7
+            df["%-hour_of_day"] = (df["date"].dt.hour + 1) / 25

             # user adds targets here by prepending them with &- (see convention below)
             # If user wishes to use multiple targets, a multioutput prediction model
@@ -302,15 +262,18 @@ a specific pair or timeframe, they should use the following structure inside `po
                 .mean()
                 / df["close"]
                 - 1
             )

+            return df
 ```

-(Please see the example script located in `freqtrade/templates/FreqaiExampleStrategy.py` for a full example of `populate_any_indicators()`)
+The user of the present example does not wish to pass the `bb_lowerband` as a feature to the model,
+and has therefore not prepended it with `%`. The user does, however, wish to pass `bb_width` to the
+model for training/prediction and has therefore prepended it with `%`.

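To summarize the naming convention at work in this example, a tiny illustrative sketch (not FreqAI internals; the column names are hypothetical):

```python
import pandas as pd

# Columns partitioned purely by their prefix, per the convention above.
df = pd.DataFrame(columns=["%-rsi-period_10", "bb_lowerband-period_10", "&-s_close"])

features = [c for c in df.columns if c.startswith("%")]   # passed to the model
labels = [c for c in df.columns if c.startswith("&")]     # targets the model predicts
supporting = [c for c in df.columns if c[0] not in "%&"]  # kept out of training
```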
-The `include_timeframes` from the example config above are the timeframes of each `populate_any_indicator()`
+The `include_timeframes` from the example config above are the timeframes (`tf`) of each call to `populate_any_indicators()`
 whose indicators are included in the feature set. In the present case, the user is asking for the
-`5m`, `15m`, and `4h` timeframes of the `rsi`, `mfi`, `roc`, and `bb_width` to be included
-in the feature set.
+`5m`, `15m`, and `4h` timeframes of the `rsi`, `mfi`, `roc`, and `bb_width` to be included in the feature set.

 In addition, the user can ask for each of these features to be included from
 informative pairs using the `include_corr_pairlist`. This means that the present feature
@@ -324,7 +287,40 @@ FreqAI to include the past 2 candles for each of the features included in th
 In total, the number of features the present user has created is:

 length of `include_timeframes` * no. features in `populate_any_indicators()` * length of `include_corr_pairlist` * no. `include_shifted_candles` * length of `indicator_periods_candles`
-_3 * 3 * 3 * 2 * 2 = 108_.
+$3 * 3 * 3 * 2 * 2 = 108$.

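Spelling the same arithmetic out with the values from the example config (a sketch that just reproduces the document's count):

```python
# Feature count for the example config, per the formula above.
include_timeframes = ["5m", "15m", "4h"]                    # 3
features_per_call = 3                                       # as counted in the text
include_corr_pairlist = ["ETH/USD", "LINK/USD", "BNB/USD"]  # 3
include_shifted_candles = 2
indicator_periods_candles = [10, 20]                        # 2

total = (len(include_timeframes) * features_per_call * len(include_corr_pairlist)
         * include_shifted_candles * len(indicator_periods_candles))
print(total)  # 108
```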
+!!! Note
+    Features **must** be defined in `populate_any_indicators()`. Making features in `populate_indicators()`
+    will fail in live/dry mode. If the user wishes to add generalized features that are not associated with
+    a specific pair or timeframe, they should use the following structure inside `populate_any_indicators()`
+    (as exemplified in `freqtrade/templates/FreqaiExampleStrategy.py`):
+
+```python
+    def populate_any_indicators(self, metadata, pair, df, tf, informative=None, coin="", set_generalized_indicators=False):
+
+        ...
+
+        # Add generalized indicators here (because in live, it will call only this function to populate
+        # indicators for retraining). Notice how we ensure not to add them multiple times by associating
+        # these generalized indicators to the basepair/timeframe
+        if set_generalized_indicators:
+            df['%-day_of_week'] = (df["date"].dt.dayofweek + 1) / 7
+            df['%-hour_of_day'] = (df['date'].dt.hour + 1) / 25
+
+            # user adds targets here by prepending them with &- (see convention below)
+            # If user wishes to use multiple targets, a multioutput prediction model
+            # needs to be used such as templates/CatboostPredictionMultiModel.py
+            df["&-s_close"] = (
+                df["close"]
+                .shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
+                .rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
+                .mean()
+                / df["close"]
+                - 1
+            )
+```
+
+(Please see the example script located in `freqtrade/templates/FreqaiExampleStrategy.py` for a full example of `populate_any_indicators()`.)

 ### Deciding the sliding training window and backtesting duration

@@ -370,12 +366,11 @@ Backtesting mode requires the user to have the data pre-downloaded (unlike dry/l
 If this command has never been executed with the existing config file, then it will train a new model
 for each pair, for each backtesting window within the bigger `--timerange`.

----
 !!! Note "Model reuse"
     Once the training is completed, the user can execute this again with the same config file and
     FreqAI will find the trained models and load them instead of spending time training. This is useful
     if the user wants to tweak (or even hyperopt) buy and sell criteria inside the strategy. If the user
     *wants* to retrain a new model with the same config file, then he/she should simply change the `identifier`.
     This way, the user can return to using any model they wish by simply changing the `identifier`.

 ---
@@ -412,10 +407,74 @@ The FreqAI strategy requires the user to include the following lines of code in
         dataframe = self.freqai.start(dataframe, metadata, self)

         return dataframe

+    def populate_any_indicators(
+        self, pair, df, tf, informative=None, set_generalized_indicators=False
+    ):
+        """
+        Function designed to automatically generate, name and merge features
+        from user indicated timeframes in the configuration file. User controls the indicators
+        passed to the training/prediction by prepending indicators with `'%-' + coin `
+        (see convention below). I.e. user should not prepend any supporting metrics
+        (e.g. bb_lowerband below) with % unless they explicitly want to pass that metric to the
+        model.
+        :param pair: pair to be used as informative
+        :param df: strategy dataframe which will receive merges from informatives
+        :param tf: timeframe of the dataframe which will modify the feature names
+        :param informative: the dataframe associated with the informative pair
+        :param coin: the name of the coin which will modify the feature names.
+        """
+
+        coin = pair.split('/')[0]
+
+        if informative is None:
+            informative = self.dp.get_pair_dataframe(pair, tf)
+
+        # first loop is automatically duplicating indicators for time periods
+        for t in self.freqai_info["feature_parameters"]["indicator_periods_candles"]:
+            t = int(t)
+            informative[f"%-{coin}rsi-period_{t}"] = ta.RSI(informative, timeperiod=t)
+            informative[f"%-{coin}mfi-period_{t}"] = ta.MFI(informative, timeperiod=t)
+            informative[f"%-{coin}adx-period_{t}"] = ta.ADX(informative, window=t)
+
+        indicators = [col for col in informative if col.startswith("%")]
+        # This loop duplicates and shifts all indicators to add a sense of recency to data
+        for n in range(self.freqai_info["feature_parameters"]["include_shifted_candles"] + 1):
+            if n == 0:
+                continue
+            informative_shift = informative[indicators].shift(n)
+            informative_shift = informative_shift.add_suffix("_shift-" + str(n))
+            informative = pd.concat((informative, informative_shift), axis=1)
+
+        df = merge_informative_pair(df, informative, self.config["timeframe"], tf, ffill=True)
+        skip_columns = [
+            (s + "_" + tf) for s in ["date", "open", "high", "low", "close", "volume"]
+        ]
+        df = df.drop(columns=skip_columns)
+
+        # Add generalized indicators here (because in live, it will call this
+        # function to populate indicators during training). Notice how we ensure not to
+        # add them multiple times
+        if set_generalized_indicators:
+
+            # user adds targets here by prepending them with &- (see convention below)
+            # If user wishes to use multiple targets, a multioutput prediction model
+            # needs to be used such as templates/CatboostPredictionMultiModel.py
+            df["&-s_close"] = (
+                df["close"]
+                .shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
+                .rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
+                .mean()
+                / df["close"]
+                - 1
+            )
+
+        return df
+
+
 ```

-The user should also include `populate_any_indicators()` from `templates/FreqaiExampleStrategy.py` which builds
-the feature set with a proper naming convention for the IFreqaiModel to use later.
+Notice how `populate_any_indicators()` is where the user adds their own features and labels ([more information](#feature-engineering)). See a full example at `templates/FreqaiExampleStrategy.py`.

 ### Setting classifier targets

@@ -425,7 +484,6 @@ FreqAI includes the `CatboostClassifier` via the flag `--freqaimodel CatboostC
     df['&s-up_or_down'] = np.where( df["close"].shift(-100) > df["close"], 'up', 'down')
 ```

-
 ### Running the model live

 FreqAI can be run dry/live using the following command

@@ -434,8 +492,8 @@ FreqAI can be run dry/live using the following command
 freqtrade trade --strategy FreqaiExampleStrategy --config config_freqai.example.json --freqaimodel LightGBMRegressor
 ```

-By default, FreqAI will not find find any existing models and will start by training a new one
+By default, FreqAI will not find any existing models and will start by training a new one
 given the user configuration settings. Following training, it will use that model to make predictions on incoming candles until a new model is available. New models are typically generated as often as possible, with FreqAI managing an internal queue of the pairs to try and keep all models equally "young." FreqAI will always use the newest trained model to make predictions on incoming live data. If users do not want FreqAI to retrain new models as often as possible, they can set `live_retrain_hours` to tell FreqAI to wait at least that number of hours before retraining a new model. Additionally, users can set `expired_hours` to tell FreqAI to avoid making predictions on models aged over this number of hours.

 If the user wishes to start dry/live from a backtested saved model, the user only needs to reuse
 the same `identifier` parameter
@@ -449,7 +507,7 @@ the same `identifier` parameter

 In this case, although FreqAI will initiate with a
 pre-trained model, it will still check to see how much time has elapsed since the model was trained,
 and if a full `live_retrain_hours` has elapsed since the end of the loaded model, FreqAI will self retrain.

 ## Data analysis techniques

@@ -457,7 +515,7 @@ and if a full `live_retrain_hours` has elapsed since the end of the loaded model

 Model training parameters are unique to the ML library used by the user. FreqAI allows users to set any parameter for any library using the `model_training_parameters` dictionary in the user configuration file. The example configuration files show some of the example parameters associated with `Catboost` and `LightGBM`, but users can add any parameters available in those libraries.

-Data split parameters are defined in `data_split_parameters` which can be any parameters associated with `Sklearn`'s `train_test_split()` function. Meanwhile, FreqAI includes some additional parameters such `weight_factor` which allows the user to weight more recent data more strongly
+Data split parameters are defined in `data_split_parameters` which can be any parameters associated with `Sklearn`'s `train_test_split()` function. FreqAI includes some additional parameters such as `weight_factor` which allows the user to weight more recent data more strongly
 than past data via an exponential function:

 $$ W_i = \exp(\frac{-i}{\alpha*n}) $$
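For intuition, a minimal numpy sketch of this weighting (illustrative only, not FreqAI's implementation; taking $i = 0$ as the most recent data point is an assumption):

```python
import numpy as np

# W_i = exp(-i / (alpha * n)): the newest point (i = 0) gets weight 1,
# and weights decay exponentially for older data.
n = 1000        # number of training data points
alpha = 0.5     # the `weight_factor` from the config
i = np.arange(n)
weights = np.exp(-i / (alpha * n))
```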
@@ -480,8 +538,8 @@ data point and all other training data points:
 $$ d_{ab} = \sqrt{\sum_{j=1}^p(X_{a,j}-X_{b,j})^2} $$

 where $d_{ab}$ is the distance between the normalized points $a$ and $b$. $p$
-is the number of features i.e. the length of the vector $X$. The
-characteristic distance, $\overline{d}$ for a set of training data points is simply the mean
+is the number of features, i.e. the length of the vector $X$.
+The characteristic distance, $\overline{d}$, for a set of training data points is simply the mean
 of the average distances:

 $$ \overline{d} = \sum_{a=1}^n(\sum_{b=1}^n(d_{ab}/n)/n) $$
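A compact numpy sketch of both formulas (illustrative only; using the nearest training point for the incoming point's distance $d_k$ is an assumption here, not necessarily FreqAI's exact choice):

```python
import numpy as np

# Pairwise distances d_ab between normalized training points X (n x p),
# then the characteristic distance d_bar as the mean of the average distances.
rng = np.random.default_rng(42)
X = rng.random((100, 5))                       # synthetic normalized training data
diff = X[:, None, :] - X[None, :, :]
d = np.sqrt((diff ** 2).sum(axis=2))           # d[a, b]
d_bar = d.mean()                               # characteristic distance

# An incoming point is compared against d_bar (the DI_threshold idea):
x_new = rng.random(5)
d_k = np.sqrt(((X - x_new) ** 2).sum(axis=1)).min()
DI = d_k / d_bar
is_outlier = DI > 1.0                          # 1.0 standing in for DI_threshold
```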
@@ -509,8 +567,7 @@ to low levels of certainty. Activating the Dissimilarity Index can be achieved w
     }
 ```

-The user can tweak the DI with `DI_threshold` to increase or decrease the extrapolation of the
-trained model.
+The user can tweak the DI with `DI_threshold` to increase or decrease the extrapolation of the trained model.

 ### Reducing data dimensionality with Principal Component Analysis

@@ -544,7 +601,7 @@ FreqAI will train an SVM on the training data (or components if the user activat

 ### Clustering the training data and removing outliers with DBSCAN

-The user can tell FreqAI to use DBSCAN to cluster training data and remove outliers from the training data set. The user activates `use_DBSCAN_to_remove_outliers` to cluster training data for identification of outliers. Also used to detect incoming outliers for prediction data points.
+The user can configure FreqAI to use DBSCAN to cluster training data and remove outliers from the training data set. The user activates `use_DBSCAN_to_remove_outliers` to cluster the training data for identification of outliers; this is also used to detect incoming outliers for prediction data points.

 ```json
     "freqai": {
@ -590,7 +647,7 @@ The follower will load models created by the leader and inference them to obtain

 FreqAI stores new model files each time it retrains. These files become obsolete as new models
 are trained and FreqAI adapts to the new market conditions. Users planning to leave FreqAI running
 for extended periods of time with high frequency retraining should set `purge_old_models` in their
 config:

 ```json
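The hunk truncates the example; based on the `purge_old_models` flag read in `IFreqaiModel` further down in this commit, the setting presumably looks like this (a sketch, not the exact excerpt from the docs):

```json
"freqai": {
    "purge_old_models": true
}
```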
@ -629,7 +686,7 @@ By default, FreqAI computes this based on training data and it assumes the label

 These are big assumptions that the user should consider when creating their labels. If the user wants to consider the population
 of *historical predictions* for creating the dynamic target instead of the trained labels, the user
 can do so by setting `fit_live_predictions_candles` to the number of historical prediction candles
 the user wishes to use to generate target statistics.

 ```json
 "freqai": {

@ -638,17 +695,17 @@ the user wishes to use to generate target statistics.

 ```

 If the user sets this value, FreqAI will initially use the predictions from the training data set
 and then subsequently begin introducing real prediction data as it is generated. FreqAI will save
 this historical data to be reloaded if the user stops and restarts with the same `identifier`.
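Again the config excerpt is truncated by the hunk; a sketch of the setting, with the key name matching the `fit_live_predictions_candles` lookup in the interface code below (the value 300 is illustrative):

```json
"freqai": {
    "fit_live_predictions_candles": 300
}
```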
 ## Extra returns per train

 Users may find that there are some important metrics that they'd like to return to the strategy at the end of each retrain.
-Users can include these metrics by assigining them to `dk.data['extra_returns_per_train']['my_new_value'] = XYZ` inside their custom prediction
+Users can include these metrics by assigning them to `dk.data['extra_returns_per_train']['my_new_value'] = XYZ` inside their custom prediction
 model class. FreqAI takes the `my_new_value` assigned in this dictionary and expands it to fit the return dataframe to the strategy.
 The user can then use the value in the strategy with `dataframe['my_new_value']`. An example of how this is already used in FreqAI is
 the `&*_mean` and `&*_std` values, which indicate the mean and standard deviation of that particular label during the most recent training.
-Another example is shown below if the user wants to use live metrics from the trade databse.
+Another example is shown below if the user wants to use live metrics from the trade database.

 The user needs to set the standard dictionary in the config so FreqAI can return proper dataframe shapes:

@ -661,10 +718,9 @@ The user needs to set the standard dictionary in the config so FreqAI can return

 These values will likely be overridden by the user prediction model, but in the case where the user model has yet to set them, or needs
 a default initial value - this is the value that will be returned.
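To make the flow concrete, a minimal runnable sketch (`my_new_value` mirrors the docs above; `DummyKitchen` stands in for the `FreqaiDataKitchen` instance FreqAI passes to the model):

```python
import pandas as pd

class DummyKitchen:
    """Stand-in for FreqaiDataKitchen, which carries the extra_returns_per_train dict."""
    def __init__(self):
        self.data = {'extra_returns_per_train': {}}

dk = DummyKitchen()

# inside the custom prediction model, at the end of a retrain:
dk.data['extra_returns_per_train']['my_new_value'] = 42.0

# FreqAI then broadcasts the scalar across the dataframe returned to the strategy:
dataframe = pd.DataFrame({'close': [1.0, 1.1, 1.2]})
dataframe['my_new_value'] = dk.data['extra_returns_per_train']['my_new_value']
print(dataframe['my_new_value'])  # the strategy reads it as a normal column
```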
 ## Building an IFreqaiModel

 FreqAI has multiple example prediction models based on libraries such as `Catboost` regression (`freqai/prediction_models/CatboostRegressor.py`) and `LightGBM` regression.
 However, users can customize and create their own prediction models using the `IFreqaiModel` class.
 Users are encouraged to override `train()` and `predict()` to customize various aspects of their training procedures, as sketched below.
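A sketch of the inheritance pattern (the `fit(data_dictionary)` signature mirrors the base classes shown later in this commit; treat the exact signatures as assumptions for other freqtrade versions):

```python
from typing import Any, Dict

from catboost import CatBoostRegressor

from freqtrade.freqai.prediction_models.BaseRegressionModel import BaseRegressionModel


class MyCustomRegressor(BaseRegressionModel):
    """Only fit() is overridden here; train()/predict() come from the base class."""

    def fit(self, data_dictionary: Dict) -> Any:
        # hyperparameters forwarded from the config's model_training_parameters dict
        params = self.freqai_info.get("model_training_parameters", {})
        model = CatBoostRegressor(**params)
        model.fit(
            X=data_dictionary["train_features"],
            y=data_dictionary["train_labels"],
        )
        return model
```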
@ -690,8 +746,8 @@ This file structure is heavily controlled and read by the `FreqaiDataKitchen()`

 and should therefore not be modified.

 ## Credits

-FreqAI was developed by a group of individuals who all contributed specific skillsets to the
-project.
+FreqAI was developed by a group of individuals who all contributed specific skillsets to the project.

 Conception and software development:
 Robert Caulk @robcaulk
@ -326,6 +326,16 @@ python3 -m pip install --upgrade pip

 python3 -m pip install -e .
 ```

+Patch conda libta-lib (Linux only):
+
+```bash
+# Ensure that the environment is active!
+conda activate freqtrade-conda
+
+cd build_helpers
+bash install_ta-lib.sh ${CONDA_PREFIX} nosudo
+```
+
 ### Congratulations

 [You are ready](#you-are-ready), and run the bot
@ -1,6 +1,6 @@

 markdown==3.3.7
 mkdocs==1.3.1
-mkdocs-material==8.3.9
+mkdocs-material==8.4.0
 mdx_truly_sane_lists==1.3
 pymdown-extensions==9.5
 jinja2==3.1.2
@ -187,7 +187,7 @@ official commands. You can ask at any moment for help with `/help`.

 | `/stats` | Shows Wins / losses by Exit reason as well as Avg. holding durations for buys and sells
 | `/exits` | Shows Wins / losses by Exit reason as well as Avg. holding durations for buys and sells
 | `/entries` | Shows Wins / losses by Exit reason as well as Avg. holding durations for buys and sells
-| `/whitelist` | Show the current whitelist
+| `/whitelist [sorted] [baseonly]` | Show the current whitelist. Optionally display in alphabetical order and/or with just the base currency of each pairing.
 | `/blacklist [pair]` | Show the current blacklist, or adds a pair to the blacklist.
 | `/edge` | Show validated pairs by Edge if it is enabled.
 | `/help` | Show help message
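For example, both options can be combined in one call (illustrative; the argument handling is shown in the Telegram handler change later in this commit):

```
/whitelist sorted baseonly
```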
@ -9,6 +9,7 @@ dependencies:

   - pandas
   - pip

+  - py-find-1st
   - aiohttp
   - SQLAlchemy
   - python-telegram-bot

@ -64,7 +65,7 @@ dependencies:

   - pip:
     - pycoingecko
-    - py_find_1st
+    # - py_find_1st
     - tables
     - pytest-random-order
     - ccxt
@ -12,7 +12,7 @@ import pandas as pd

 import rapidjson
 from joblib import dump, load
 from joblib.externals import cloudpickle
-from numpy.typing import ArrayLike, NDArray
+from numpy.typing import NDArray
 from pandas import DataFrame

 from freqtrade.configuration import TimeRange
@ -38,8 +38,7 @@ class FreqaiDataDrawer:

     """
     Class aimed at holding all pair models/info in memory for better inferencing/retraining/saving
     /loading to/from disk.
-    This object remains persistent throughout live/dry, unlike FreqaiDataKitchen, which is
-    reinstantiated for each coin.
+    This object remains persistent throughout live/dry.

     Record of contribution:
     FreqAI was developed by a group of individuals who all contributed specific skillsets to the

@ -56,7 +55,7 @@ class FreqaiDataDrawer:

     Beta testing and bug reporting:
     @bloodhunter4rc, Salah Lamkadem @ikonx, @ken11o2, @longyu, @paranoidandy, @smidelis, @smarm
-    Juha Nykänen @suikula, Wagner Costa @wagnercosta
+    Juha Nykänen @suikula, Wagner Costa @wagnercosta, Johan Vlugt @Jooopieeert
     """

     def __init__(self, full_path: Path, config: dict, follow_mode: bool = False):
@ -85,6 +84,8 @@ class FreqaiDataDrawer:

         self.load_historic_predictions_from_disk()
         self.training_queue: Dict[str, int] = {}
         self.history_lock = threading.Lock()
+        self.save_lock = threading.Lock()
+        self.pair_dict_lock = threading.Lock()
         self.old_DBSCAN_eps: Dict[str, float] = {}
         self.empty_pair_dict: pair_info = {
             "model_filename": "", "trained_timestamp": 0,
@ -145,9 +146,10 @@ class FreqaiDataDrawer:

         """
         Save data drawer full of all pair model metadata in present model folder.
         """
-        with open(self.pair_dictionary_path, 'w') as fp:
-            rapidjson.dump(self.pair_dict, fp, default=self.np_encoder,
-                           number_mode=rapidjson.NM_NATIVE)
+        with self.save_lock:
+            with open(self.pair_dictionary_path, 'w') as fp:
+                rapidjson.dump(self.pair_dict, fp, default=self.np_encoder,
+                               number_mode=rapidjson.NM_NATIVE)

     def save_follower_dict_to_disk(self):
         """
@ -227,90 +229,50 @@ class FreqaiDataDrawer:

     def pair_to_end_of_training_queue(self, pair: str) -> None:
         # march all pairs up in the queue
-        for p in self.pair_dict:
-            self.pair_dict[p]["priority"] -= 1
-        # send pair to end of queue
-        self.pair_dict[pair]["priority"] = len(self.pair_dict)
+        with self.pair_dict_lock:
+            for p in self.pair_dict:
+                self.pair_dict[p]["priority"] -= 1
+            # send pair to end of queue
+            self.pair_dict[pair]["priority"] = len(self.pair_dict)

-    def set_initial_return_values(self, pair: str, dk: FreqaiDataKitchen,
-                                  pred_df: DataFrame, do_preds: ArrayLike) -> None:
+    def set_initial_return_values(self, pair: str, pred_df: DataFrame) -> None:
         """
-        Set the initial return values to a persistent dataframe. This avoids needing to repredict on
-        historical candles, and also stores historical predictions despite retrainings (so stored
-        predictions are true predictions, not just inferencing on trained data)
+        Set the initial return values to the historical predictions dataframe. This avoids needing
+        to repredict on historical candles, and also stores historical predictions despite
+        retrainings (so stored predictions are true predictions, not just inferencing on trained
+        data)
         """

-        # dynamic df returned to strategy and plotted in frequi
-        mrv_df = self.model_return_values[pair] = pd.DataFrame()
-
-        # if user reused `identifier` in config and has historical predictions collected, load them
-        # so that frequi remains uninterrupted after a crash
         hist_df = self.historic_predictions
-        if pair in hist_df:
-            len_diff = len(hist_df[pair].index) - len(pred_df.index)
-            if len_diff < 0:
-                df_concat = pd.concat([pred_df.iloc[:abs(len_diff)], hist_df[pair]],
-                                      ignore_index=True, keys=hist_df[pair].keys())
-            else:
-                df_concat = hist_df[pair].tail(len(pred_df.index)).reset_index(drop=True)
-            df_concat = df_concat.fillna(0)
-            self.model_return_values[pair] = df_concat
-            logger.info(f'Setting initial FreqUI plots from historical data for {pair}.')
-        else:
-            for label in pred_df.columns:
-                mrv_df[label] = pred_df[label]
-                if mrv_df[label].dtype == object:
-                    continue
-                mrv_df[f"{label}_mean"] = dk.data["labels_mean"][label]
-                mrv_df[f"{label}_std"] = dk.data["labels_std"][label]
-
-            if self.freqai_info["feature_parameters"].get("DI_threshold", 0) > 0:
-                mrv_df["DI_values"] = dk.DI_values
-
-            mrv_df["do_predict"] = do_preds
-
-            if dk.data['extra_returns_per_train']:
-                rets = dk.data['extra_returns_per_train']
-                for return_str in rets:
-                    mrv_df[return_str] = rets[return_str]
-
-        # for keras type models, the conv_window needs to be prepended so
-        # viewing is correct in frequi
-        if self.freqai_info.get('keras', False):
-            n_lost_points = self.freqai_info.get('conv_width', 2)
-            zeros_df = DataFrame(np.zeros((n_lost_points, len(mrv_df.columns))),
-                                 columns=mrv_df.columns)
-            self.model_return_values[pair] = pd.concat(
-                [zeros_df, mrv_df], axis=0, ignore_index=True)
+        len_diff = len(hist_df[pair].index) - len(pred_df.index)
+        if len_diff < 0:
+            df_concat = pd.concat([pred_df.iloc[:abs(len_diff)], hist_df[pair]],
+                                  ignore_index=True, keys=hist_df[pair].keys())
+        else:
+            df_concat = hist_df[pair].tail(len(pred_df.index)).reset_index(drop=True)
+        df_concat = df_concat.fillna(0)
+        self.model_return_values[pair] = df_concat

     def append_model_predictions(self, pair: str, predictions: DataFrame,
                                  do_preds: NDArray[np.int_],
                                  dk: FreqaiDataKitchen, len_df: int) -> None:
+        """
+        Append model predictions to historic predictions dataframe, then set the
+        strategy return dataframe to the tail of the historic predictions. The length of
+        the tail is equivalent to the length of the dataframe that entered FreqAI from
+        the strategy originally. Doing this allows FreqUI to always display the correct
+        historic predictions.
+        """

-        # strat seems to feed us variable sized dataframes - and since we are trying to build our
-        # own return array in the same shape, we need to figure out how the size has changed
-        # and adapt our stored/returned info accordingly.
-        length_difference = len(self.model_return_values[pair]) - len_df
-        i = 0
+        index = self.historic_predictions[pair].index[-1:]
+        columns = self.historic_predictions[pair].columns

-        if length_difference == 0:
-            i = 1
-        elif length_difference > 0:
-            i = length_difference + 1
+        nan_df = pd.DataFrame(np.nan, index=index, columns=columns)
+        self.historic_predictions[pair] = pd.concat(
+            [self.historic_predictions[pair], nan_df], ignore_index=True, axis=0)
+        df = self.historic_predictions[pair]

-        df = self.model_return_values[pair] = self.model_return_values[pair].shift(-i)
-
-        if pair in self.historic_predictions:
-            hp_df = self.historic_predictions[pair]
-            # here are some pandas hula hoops to accommodate the possibility of a series
-            # or dataframe depending number of labels requested by user
-            nan_df = pd.DataFrame(np.nan, index=hp_df.index[-2:] + 2, columns=hp_df.columns)
-            hp_df = pd.concat([hp_df, nan_df], ignore_index=True, axis=0)
-            self.historic_predictions[pair] = hp_df[:-1]
-
-        # incase user adds additional "predictions" e.g. predict_proba output:
+        # model outputs and associated statistics
         for label in predictions.columns:
             df[label].iloc[-1] = predictions[label].iloc[-1]
             if df[label].dtype == object:
@ -318,26 +280,18 @@ class FreqaiDataDrawer:

             df[f"{label}_mean"].iloc[-1] = dk.data["labels_mean"][label]
             df[f"{label}_std"].iloc[-1] = dk.data["labels_std"][label]

+        # outlier indicators
         df["do_predict"].iloc[-1] = do_preds[-1]
-
         if self.freqai_info["feature_parameters"].get("DI_threshold", 0) > 0:
             df["DI_values"].iloc[-1] = dk.DI_values[-1]

+        # extra values the user added within custom prediction model
         if dk.data['extra_returns_per_train']:
             rets = dk.data['extra_returns_per_train']
             for return_str in rets:
                 df[return_str].iloc[-1] = rets[return_str]

-        # append the new predictions to persistent storage
-        if pair in self.historic_predictions:
-            for key in df.keys():
-                self.historic_predictions[pair][key].iloc[-1] = df[key].iloc[-1]
-
-        if length_difference < 0:
-            prepend_df = pd.DataFrame(
-                np.zeros((abs(length_difference) - 1, len(df.columns))), columns=df.columns
-            )
-            df = pd.concat([prepend_df, df], axis=0)
+        self.model_return_values[pair] = df.tail(len_df).reset_index(drop=True)

     def attach_return_values_to_return_dataframe(
             self, pair: str, dataframe: DataFrame) -> DataFrame:

@ -358,10 +312,7 @@ class FreqaiDataDrawer:

         dk.find_features(dataframe)

-        if self.freqai_info.get('predict_proba', []):
-            full_labels = dk.label_list + self.freqai_info['predict_proba']
-        else:
-            full_labels = dk.label_list
+        full_labels = dk.label_list + dk.unique_class_list

         for label in full_labels:
             dataframe[label] = 0

@ -575,7 +526,7 @@ class FreqaiDataDrawer:

                 history_data[pair][tf] = pd.concat(
                     [
                         history_data[pair][tf],
-                        strategy.dp.get_pair_dataframe(pair, tf).iloc[index:],
+                        df_dp.iloc[index:],
                     ],
                     ignore_index=True,
                     axis=0,
|
@ -34,6 +34,9 @@ class FreqaiDataKitchen:
|
|||||||
Class designed to analyze data for a single pair. Employed by the IFreqaiModel class.
|
Class designed to analyze data for a single pair. Employed by the IFreqaiModel class.
|
||||||
Functionalities include holding, saving, loading, and analyzing the data.
|
Functionalities include holding, saving, loading, and analyzing the data.
|
||||||
|
|
||||||
|
This object is not persistent, it is reinstantiated for each coin, each time the coin
|
||||||
|
model needs to be inferenced or trained.
|
||||||
|
|
||||||
Record of contribution:
|
Record of contribution:
|
||||||
FreqAI was developed by a group of individuals who all contributed specific skillsets to the
|
FreqAI was developed by a group of individuals who all contributed specific skillsets to the
|
||||||
project.
|
project.
|
||||||
@ -49,7 +52,7 @@ class FreqaiDataKitchen:
|
|||||||
|
|
||||||
Beta testing and bug reporting:
|
Beta testing and bug reporting:
|
||||||
@bloodhunter4rc, Salah Lamkadem @ikonx, @ken11o2, @longyu, @paranoidandy, @smidelis, @smarm
|
@bloodhunter4rc, Salah Lamkadem @ikonx, @ken11o2, @longyu, @paranoidandy, @smidelis, @smarm
|
||||||
Juha Nykänen @suikula, Wagner Costa @wagnercosta
|
Juha Nykänen @suikula, Wagner Costa @wagnercosta, Johan Vlugt @Jooopieeert
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(
|
def __init__(
|
||||||
@ -70,6 +73,7 @@ class FreqaiDataKitchen:
|
|||||||
self.model_filename: str = ""
|
self.model_filename: str = ""
|
||||||
self.live = live
|
self.live = live
|
||||||
self.pair = pair
|
self.pair = pair
|
||||||
|
|
||||||
self.svm_model: linear_model.SGDOneClassSVM = None
|
self.svm_model: linear_model.SGDOneClassSVM = None
|
||||||
self.keras: bool = self.freqai_config.get("keras", False)
|
self.keras: bool = self.freqai_config.get("keras", False)
|
||||||
self.set_all_pairs()
|
self.set_all_pairs()
|
||||||
@ -90,6 +94,8 @@ class FreqaiDataKitchen:
|
|||||||
self.data['extra_returns_per_train'] = self.freqai_config.get('extra_returns_per_train', {})
|
self.data['extra_returns_per_train'] = self.freqai_config.get('extra_returns_per_train', {})
|
||||||
self.thread_count = self.freqai_config.get("data_kitchen_thread_count", -1)
|
self.thread_count = self.freqai_config.get("data_kitchen_thread_count", -1)
|
||||||
self.train_dates: DataFrame = pd.DataFrame()
|
self.train_dates: DataFrame = pd.DataFrame()
|
||||||
|
self.unique_classes: Dict[str, list] = {}
|
||||||
|
self.unique_class_list: list = []
|
||||||
|
|
||||||
def set_paths(
|
def set_paths(
|
||||||
self,
|
self,
|
||||||
@ -337,7 +343,7 @@ class FreqaiDataKitchen:
|
|||||||
"""
|
"""
|
||||||
|
|
||||||
for label in df.columns:
|
for label in df.columns:
|
||||||
if df[label].dtype == object:
|
if df[label].dtype == object or label in self.unique_class_list:
|
||||||
continue
|
continue
|
||||||
df[label] = (
|
df[label] = (
|
||||||
(df[label] + 1)
|
(df[label] + 1)
|
||||||
@ -977,6 +983,8 @@ class FreqaiDataKitchen:
|
|||||||
informative=corr_dataframes[i][tf]
|
informative=corr_dataframes[i][tf]
|
||||||
)
|
)
|
||||||
|
|
||||||
|
self.get_unique_classes_from_labels(dataframe)
|
||||||
|
|
||||||
return dataframe
|
return dataframe
|
||||||
|
|
||||||
def fit_labels(self) -> None:
|
def fit_labels(self) -> None:
|
||||||
@ -992,6 +1000,10 @@ class FreqaiDataKitchen:
|
|||||||
f = spy.stats.norm.fit(self.data_dictionary["train_labels"][label])
|
f = spy.stats.norm.fit(self.data_dictionary["train_labels"][label])
|
||||||
self.data["labels_mean"][label], self.data["labels_std"][label] = f[0], f[1]
|
self.data["labels_mean"][label], self.data["labels_std"][label] = f[0], f[1]
|
||||||
|
|
||||||
|
# incase targets are classifications
|
||||||
|
for label in self.unique_class_list:
|
||||||
|
self.data["labels_mean"][label], self.data["labels_std"][label] = 0, 0
|
||||||
|
|
||||||
return
|
return
|
||||||
|
|
||||||
def remove_features_from_df(self, dataframe: DataFrame) -> DataFrame:
|
def remove_features_from_df(self, dataframe: DataFrame) -> DataFrame:
|
||||||
@ -1003,3 +1015,15 @@ class FreqaiDataKitchen:
|
|||||||
col for col in dataframe.columns if not col.startswith("%") or col.startswith("%%")
|
col for col in dataframe.columns if not col.startswith("%") or col.startswith("%%")
|
||||||
]
|
]
|
||||||
return dataframe[to_keep]
|
return dataframe[to_keep]
|
||||||
|
|
||||||
|
def get_unique_classes_from_labels(self, dataframe: DataFrame) -> None:
|
||||||
|
|
||||||
|
self.find_features(dataframe)
|
||||||
|
|
||||||
|
for key in self.label_list:
|
||||||
|
if dataframe[key].dtype == object:
|
||||||
|
self.unique_classes[key] = dataframe[key].dropna().unique()
|
||||||
|
|
||||||
|
if self.unique_classes:
|
||||||
|
for label in self.unique_classes:
|
||||||
|
self.unique_class_list += list(self.unique_classes[label])
|
||||||
|
@ -6,6 +6,7 @@ import threading

 import time
 from abc import ABC, abstractmethod
 from pathlib import Path
+from threading import Lock
 from typing import Any, Dict, Tuple

 import numpy as np

@ -16,6 +17,7 @@ from pandas import DataFrame

 from freqtrade.configuration import TimeRange
 from freqtrade.enums import RunMode
 from freqtrade.exceptions import OperationalException
+from freqtrade.exchange import timeframe_to_seconds
 from freqtrade.freqai.data_drawer import FreqaiDataDrawer
 from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
 from freqtrade.strategy.interface import IStrategy

@ -52,7 +54,7 @@ class IFreqaiModel(ABC):

     Beta testing and bug reporting:
     @bloodhunter4rc, Salah Lamkadem @ikonx, @ken11o2, @longyu, @paranoidandy, @smidelis, @smarm
-    Juha Nykänen @suikula, Wagner Costa @wagnercosta
+    Juha Nykänen @suikula, Wagner Costa @wagnercosta, Johan Vlugt @Jooopieeert
     """

     def __init__(self, config: Dict[str, Any]) -> None:

@ -70,7 +72,6 @@ class IFreqaiModel(ABC):

         self.set_full_path()
         self.follow_mode: bool = self.freqai_info.get("follow_mode", False)
         self.dd = FreqaiDataDrawer(Path(self.full_path), self.config, self.follow_mode)
-        self.lock = threading.Lock()
         self.identifier: str = self.freqai_info.get("identifier", "no_id_provided")
         self.scanning = False
         self.keras: bool = self.freqai_info.get("keras", False)

@ -82,6 +83,10 @@ class IFreqaiModel(ABC):

         self.total_pairs = len(self.config.get("exchange", {}).get("pair_whitelist"))
         self.last_trade_database_summary: DataFrame = {}
         self.current_trade_database_summary: DataFrame = {}
+        self.analysis_lock = Lock()
+        self.inference_time: float = 0
+        self.begin_time: float = 0
+        self.base_tf_seconds = timeframe_to_seconds(self.config['timeframe'])

     def assert_config(self, config: Dict[str, Any]) -> None:
@ -104,6 +109,7 @@ class IFreqaiModel(ABC):

         self.dd.set_pair_dict_info(metadata)

         if self.live:
+            self.inference_timer('start')
             self.dk = FreqaiDataKitchen(self.config, self.live, metadata["pair"])
             dk = self.start_live(dataframe, metadata, strategy, self.dk)

@ -115,14 +121,16 @@ class IFreqaiModel(ABC):

         elif not self.follow_mode:
             self.dk = FreqaiDataKitchen(self.config, self.live, metadata["pair"])
             logger.info(f"Training {len(self.dk.training_timeranges)} timeranges")
-            dataframe = self.dk.use_strategy_to_populate_indicators(
-                strategy, prediction_dataframe=dataframe, pair=metadata["pair"]
-            )
+            with self.analysis_lock:
+                dataframe = self.dk.use_strategy_to_populate_indicators(
+                    strategy, prediction_dataframe=dataframe, pair=metadata["pair"]
+                )
             dk = self.start_backtesting(dataframe, metadata, self.dk)

         dataframe = dk.remove_features_from_df(dk.return_dataframe)
         del dk
+        if self.live:
+            self.inference_timer('stop')
         return dataframe

     @threaded

@ -155,6 +163,8 @@ class IFreqaiModel(ABC):

                     new_trained_timerange, pair, strategy, dk, data_load_timerange
                 )

+                self.dd.save_historic_predictions_to_disk()
+
     def start_backtesting(
         self, dataframe: DataFrame, metadata: dict, dk: FreqaiDataKitchen
     ) -> FreqaiDataKitchen:
@ -290,9 +300,10 @@ class IFreqaiModel(ABC):

         # load the model and associated data into the data kitchen
         self.model = self.dd.load_data(metadata["pair"], dk)

-        dataframe = self.dk.use_strategy_to_populate_indicators(
-            strategy, prediction_dataframe=dataframe, pair=metadata["pair"]
-        )
+        with self.analysis_lock:
+            dataframe = self.dk.use_strategy_to_populate_indicators(
+                strategy, prediction_dataframe=dataframe, pair=metadata["pair"]
+            )

         if not self.model:
             logger.warning(

@ -319,7 +330,10 @@ class IFreqaiModel(ABC):

             # first predictions are made on entire historical candle set coming from strategy. This
             # allows FreqUI to show full return values.
             pred_df, do_preds = self.predict(dataframe, dk)
-            self.dd.set_initial_return_values(pair, dk, pred_df, do_preds)
+            if pair not in self.dd.historic_predictions:
+                self.set_initial_historic_predictions(pred_df, dk, pair)
+            self.dd.set_initial_return_values(pair, pred_df)
+
             dk.return_dataframe = self.dd.attach_return_values_to_return_dataframe(pair, dataframe)
             return
         elif self.dk.check_if_model_expired(trained_timestamp):

@ -336,6 +350,8 @@ class IFreqaiModel(ABC):

             # historical accuracy reasons.
             pred_df, do_preds = self.predict(dataframe.iloc[-self.CONV_WIDTH:], dk, first=False)

+            if self.freqai_info.get('fit_live_predictions_candles', 0) and self.live:
+                self.fit_live_predictions(dk, pair)
             self.dd.append_model_predictions(pair, pred_df, do_preds, dk, len(dataframe))
             dk.return_dataframe = self.dd.attach_return_values_to_return_dataframe(pair, dataframe)
@ -480,9 +496,10 @@ class IFreqaiModel(ABC):

             data_load_timerange, pair, dk
         )

-        unfiltered_dataframe = dk.use_strategy_to_populate_indicators(
-            strategy, corr_dataframes, base_dataframes, pair
-        )
+        with self.analysis_lock:
+            unfiltered_dataframe = dk.use_strategy_to_populate_indicators(
+                strategy, corr_dataframes, base_dataframes, pair
+            )

         unfiltered_dataframe = dk.slice_dataframe(new_trained_timerange, unfiltered_dataframe)

@ -495,15 +512,14 @@ class IFreqaiModel(ABC):

         dk.set_new_model_names(pair, new_trained_timerange)
         self.dd.pair_dict[pair]["first"] = False
         if self.dd.pair_dict[pair]["priority"] == 1 and self.scanning:
-            with self.lock:
-                self.dd.pair_to_end_of_training_queue(pair)
+            self.dd.pair_to_end_of_training_queue(pair)
         self.dd.save_data(model, pair, dk)

         if self.freqai_info.get("purge_old_models", False):
             self.dd.purge_old_models()

     def set_initial_historic_predictions(
-        self, df: DataFrame, model: Any, dk: FreqaiDataKitchen, pair: str
+        self, pred_df: DataFrame, dk: FreqaiDataKitchen, pair: str
     ) -> None:
         """
         This function is called only if the datadrawer failed to load an

@ -528,14 +544,6 @@ class IFreqaiModel(ABC):

         :param: dk: FreqaiDataKitchen = object containing methods for data analysis
         :param: pair: str = current pair
         """
-        num_candles = self.freqai_info.get('fit_live_predictions_candles', 600)
-        if not num_candles:
-            num_candles = 600
-        df_tail = df.tail(num_candles)
-        trained_predictions = model.predict(df_tail)
-        pred_df = DataFrame(trained_predictions, columns=dk.label_list)
-
-        pred_df = dk.denormalize_labels_from_metadata(pred_df)
-
         self.dd.historic_predictions[pair] = pred_df
         hist_preds_df = self.dd.historic_predictions[pair]
@ -554,15 +562,27 @@ class IFreqaiModel(ABC):

         for return_str in dk.data['extra_returns_per_train']:
             hist_preds_df[return_str] = 0

-    def fit_live_predictions(self, dk: FreqaiDataKitchen) -> None:
+        # for keras type models, the conv_window needs to be prepended so
+        # viewing is correct in frequi
+        if self.freqai_info.get('keras', False):
+            n_lost_points = self.freqai_info.get('conv_width', 2)
+            zeros_df = DataFrame(np.zeros((n_lost_points, len(hist_preds_df.columns))),
+                                 columns=hist_preds_df.columns)
+            self.dd.historic_predictions[pair] = pd.concat(
+                [zeros_df, hist_preds_df], axis=0, ignore_index=True)
+
+    def fit_live_predictions(self, dk: FreqaiDataKitchen, pair: str) -> None:
         """
         Fit the labels with a gaussian distribution
         """
         import scipy as spy

+        # add classes from classifier label types if used
+        full_labels = dk.label_list + dk.unique_class_list
+
         num_candles = self.freqai_info.get("fit_live_predictions_candles", 100)
         dk.data["labels_mean"], dk.data["labels_std"] = {}, {}
-        for label in dk.label_list:
+        for label in full_labels:
             if self.dd.historic_predictions[dk.pair][label].dtype == object:
                 continue
             f = spy.stats.norm.fit(self.dd.historic_predictions[dk.pair][label].tail(num_candles))
@ -570,6 +590,28 @@ class IFreqaiModel(ABC):

         return

+    def inference_timer(self, do='start'):
+        """
+        Timer designed to track the cumulative time spent in FreqAI for one pass through
+        the whitelist. This will check if the time spent is more than 1/4 the time
+        of a single candle, and if so, it will warn the user of degraded performance.
+        """
+        if do == 'start':
+            self.pair_it += 1
+            self.begin_time = time.time()
+        elif do == 'stop':
+            end = time.time()
+            self.inference_time += (end - self.begin_time)
+            if self.pair_it == self.total_pairs:
+                logger.info(
+                    f'Total time spent inferencing pairlist {self.inference_time:.2f} seconds')
+                if self.inference_time > 0.25 * self.base_tf_seconds:
+                    logger.warning('Inference took over 25% of the candle time. Reduce pairlist to'
+                                   ' avoid blinding open trades and degrading performance.')
+                self.pair_it = 0
+                self.inference_time = 0
+        return
+
     # Following methods which are overridden by user made prediction models.
     # See freqai/prediction_models/CatboostPredictionModel.py for an example.
freqtrade/freqai/prediction_models/BaseClassifierModel.py (new file, 99 lines)
@ -0,0 +1,99 @@
+import logging
+from typing import Any, Tuple
+
+import numpy as np
+import numpy.typing as npt
+import pandas as pd
+from pandas import DataFrame
+
+from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
+from freqtrade.freqai.freqai_interface import IFreqaiModel
+
+
+logger = logging.getLogger(__name__)
+
+
+class BaseClassifierModel(IFreqaiModel):
+    """
+    Base class for classification type models (e.g. Catboost, LightGBM, XGboost etc.).
+    User *must* inherit from this class and set fit() and predict(). See example scripts
+    such as prediction_models/CatboostPredictionModel.py for guidance.
+    """
+
+    def train(
+        self, unfiltered_dataframe: DataFrame, pair: str, dk: FreqaiDataKitchen
+    ) -> Any:
+        """
+        Filter the training data and train a model to it. Train makes heavy use of the datakitchen
+        for storing, saving, loading, and analyzing the data.
+        :param unfiltered_dataframe: Full dataframe for the current training period
+        :param metadata: pair metadata from strategy.
+        :return:
+        :model: Trained model which can be used to inference (self.predict)
+        """
+
+        logger.info("-------------------- Starting training " f"{pair} --------------------")
+
+        # filter the features requested by user in the configuration file and elegantly handle NaNs
+        features_filtered, labels_filtered = dk.filter_features(
+            unfiltered_dataframe,
+            dk.training_features_list,
+            dk.label_list,
+            training_filter=True,
+        )
+
+        start_date = unfiltered_dataframe["date"].iloc[0].strftime("%Y-%m-%d")
+        end_date = unfiltered_dataframe["date"].iloc[-1].strftime("%Y-%m-%d")
+        logger.info(f"-------------------- Training on data from {start_date} to "
+                    f"{end_date}--------------------")
+        # split data into train/test data.
+        data_dictionary = dk.make_train_test_datasets(features_filtered, labels_filtered)
+        if not self.freqai_info.get('fit_live_predictions', 0) or not self.live:
+            dk.fit_labels()
+        # normalize all data based on train_dataset only
+        data_dictionary = dk.normalize_data(data_dictionary)
+
+        # optional additional data cleaning/analysis
+        self.data_cleaning_train(dk)
+
+        logger.info(
+            f'Training model on {len(dk.data_dictionary["train_features"].columns)}' " features"
+        )
+        logger.info(f'Training model on {len(data_dictionary["train_features"])} data points')
+
+        model = self.fit(data_dictionary)
+
+        logger.info(f"--------------------done training {pair}--------------------")
+
+        return model
+
+    def predict(
+        self, unfiltered_dataframe: DataFrame, dk: FreqaiDataKitchen, first: bool = False
+    ) -> Tuple[DataFrame, npt.NDArray[np.int_]]:
+        """
+        Filter the prediction features data and predict with it.
+        :param: unfiltered_dataframe: Full dataframe for the current backtest period.
+        :return:
+        :pred_df: dataframe containing the predictions
+        :do_predict: np.array of 1s and 0s to indicate places where freqai needed to remove
+        data (NaNs) or felt uncertain about data (PCA and DI index)
+        """
+
+        dk.find_features(unfiltered_dataframe)
+        filtered_dataframe, _ = dk.filter_features(
+            unfiltered_dataframe, dk.training_features_list, training_filter=False
+        )
+        filtered_dataframe = dk.normalize_data_from_metadata(filtered_dataframe)
+        dk.data_dictionary["prediction_features"] = filtered_dataframe
+
+        self.data_cleaning_predict(dk, filtered_dataframe)
+
+        predictions = self.model.predict(dk.data_dictionary["prediction_features"])
+        pred_df = DataFrame(predictions, columns=dk.label_list)
+
+        predictions_prob = self.model.predict_proba(dk.data_dictionary["prediction_features"])
+        pred_df_prob = DataFrame(predictions_prob, columns=self.model.classes_)
+
+        pred_df = pd.concat([pred_df, pred_df_prob], axis=1)
+
+        return (pred_df, dk.do_predict)
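For orientation, a runnable illustration of the concatenated classifier output (the label and class names are hypothetical; the structure follows the `predict()` above, where `predict_proba` columns are appended after the class prediction):

```python
import pandas as pd

pred_df = pd.DataFrame({"&s-up_or_down": ["up", "down"]})            # model.predict(...)
pred_df_prob = pd.DataFrame({"down": [0.2, 0.7], "up": [0.8, 0.3]})  # model.predict_proba(...)

pred_df = pd.concat([pred_df, pred_df_prob], axis=1)
# columns: ['&s-up_or_down', 'down', 'up']; after FreqAI attaches the return
# values, the strategy can read the class probabilities via dataframe['up'] etc.
print(pred_df)
```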
@ -62,15 +62,6 @@ class BaseRegressionModel(IFreqaiModel):

         model = self.fit(data_dictionary)

-        if pair not in self.dd.historic_predictions:
-            self.set_initial_historic_predictions(
-                data_dictionary['train_features'], model, dk, pair)
-
-        if self.freqai_info.get('fit_live_predictions_candles', 0) and self.live:
-            self.fit_live_predictions(dk)
-
-        self.dd.save_historic_predictions_to_disk()
-
         logger.info(f"--------------------done training {pair}--------------------")

         return model
@ -24,11 +24,11 @@ class BaseTensorFlowModel(IFreqaiModel):

         for storing, saving, loading, and analyzing the data.
         :param unfiltered_dataframe: Full dataframe for the current training period
         :param metadata: pair metadata from strategy.
-        :returns:
+        :return:
         :model: Trained model which can be used to inference (self.predict)
         """

-        logger.info("--------------------Starting training " f"{pair} --------------------")
+        logger.info("-------------------- Starting training " f"{pair} --------------------")

         # filter the features requested by user in the configuration file and elegantly handle NaNs
         features_filtered, labels_filtered = dk.filter_features(

@ -38,9 +38,14 @@ class BaseTensorFlowModel(IFreqaiModel):

             training_filter=True,
         )

+        start_date = unfiltered_dataframe["date"].iloc[0].strftime("%Y-%m-%d")
+        end_date = unfiltered_dataframe["date"].iloc[-1].strftime("%Y-%m-%d")
+        logger.info(f"-------------------- Training on data from {start_date} to "
+                    f"{end_date}--------------------")
         # split data into train/test data.
         data_dictionary = dk.make_train_test_datasets(features_filtered, labels_filtered)
+        if not self.freqai_info.get('fit_live_predictions', 0) or not self.live:
+            dk.fit_labels()
         # normalize all data based on train_dataset only
         data_dictionary = dk.normalize_data(data_dictionary)

@ -54,17 +59,6 @@ class BaseTensorFlowModel(IFreqaiModel):

         model = self.fit(data_dictionary)

-        if pair not in self.dd.historic_predictions:
-            self.set_initial_historic_predictions(
-                data_dictionary['train_features'], model, dk, pair)
-
-        if self.freqai_info.get('fit_live_predictions_candles', 0) and self.live:
-            self.fit_live_predictions(dk)
-        else:
-            dk.fit_labels()
-
-        self.dd.save_historic_predictions_to_disk()
-
         logger.info(f"--------------------done training {pair}--------------------")

         return model
|
@ -3,13 +3,13 @@ from typing import Any, Dict
|
|||||||
|
|
||||||
from catboost import CatBoostClassifier, Pool
|
from catboost import CatBoostClassifier, Pool
|
||||||
|
|
||||||
from freqtrade.freqai.prediction_models.BaseRegressionModel import BaseRegressionModel
|
from freqtrade.freqai.prediction_models.BaseClassifierModel import BaseClassifierModel
|
||||||
|
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
class CatboostClassifier(BaseRegressionModel):
|
class CatboostClassifier(BaseClassifierModel):
|
||||||
"""
|
"""
|
||||||
User created prediction model. The class needs to override three necessary
|
User created prediction model. The class needs to override three necessary
|
||||||
functions, predict(), train(), fit(). The class inherits ModelHandler which
|
functions, predict(), train(), fit(). The class inherits ModelHandler which
|
||||||
|
@ -3,13 +3,13 @@ from typing import Any, Dict

 from lightgbm import LGBMClassifier

-from freqtrade.freqai.prediction_models.BaseRegressionModel import BaseRegressionModel
+from freqtrade.freqai.prediction_models.BaseClassifierModel import BaseClassifierModel


 logger = logging.getLogger(__name__)


-class LightGBMClassifier(BaseRegressionModel):
+class LightGBMClassifier(BaseClassifierModel):
     """
     User created prediction model. The class needs to override three necessary
     functions, predict(), train(), fit(). The class inherits ModelHandler which
@ -1475,12 +1475,6 @@ class FreqtradeBot(LoggingMixin):

                 ExitType.STOP_LOSS, ExitType.TRAILING_STOP_LOSS, ExitType.LIQUIDATION):
             exit_type = 'stoploss'

-        # if stoploss is on exchange and we are on dry_run mode,
-        # we consider the sell price stop price
-        if (self.config['dry_run'] and exit_type == 'stoploss'
-                and self.strategy.order_types['stoploss_on_exchange']):
-            limit = trade.stoploss_or_liquidation
-
         # set custom_exit_price if available
         proposed_limit_rate = limit
         current_profit = trade.calc_profit_ratio(limit)
|
@ -4,14 +4,14 @@ Volume PairList provider
|
|||||||
Provides dynamic pair list based on trade volumes
|
Provides dynamic pair list based on trade volumes
|
||||||
"""
|
"""
|
||||||
import logging
|
import logging
|
||||||
|
from datetime import datetime, timedelta, timezone
|
||||||
from typing import Any, Dict, List
|
from typing import Any, Dict, List
|
||||||
|
|
||||||
import arrow
|
|
||||||
from cachetools import TTLCache
|
from cachetools import TTLCache
|
||||||
|
|
||||||
from freqtrade.constants import ListPairsWithTimeframes
|
from freqtrade.constants import ListPairsWithTimeframes
|
||||||
from freqtrade.exceptions import OperationalException
|
from freqtrade.exceptions import OperationalException
|
||||||
from freqtrade.exchange import timeframe_to_minutes
|
from freqtrade.exchange import timeframe_to_minutes, timeframe_to_prev_date
|
||||||
from freqtrade.misc import format_ms_time
|
from freqtrade.misc import format_ms_time
|
||||||
from freqtrade.plugins.pairlist.IPairList import IPairList
|
from freqtrade.plugins.pairlist.IPairList import IPairList
|
||||||
|
|
||||||
@ -158,16 +158,16 @@ class VolumePairList(IPairList):
|
|||||||
filtered_tickers: List[Dict[str, Any]] = [{'symbol': k} for k in pairlist]
|
filtered_tickers: List[Dict[str, Any]] = [{'symbol': k} for k in pairlist]
|
||||||
|
|
||||||
# get lookback period in ms, for exchange ohlcv fetch
|
# get lookback period in ms, for exchange ohlcv fetch
|
||||||
since_ms = int(arrow.utcnow()
|
since_ms = int(timeframe_to_prev_date(
|
||||||
.floor('minute')
|
self._lookback_timeframe,
|
||||||
.shift(minutes=-(self._lookback_period * self._tf_in_min)
|
datetime.now(timezone.utc) + timedelta(
|
||||||
- self._tf_in_min)
|
minutes=-(self._lookback_period * self._tf_in_min) - self._tf_in_min)
|
||||||
.int_timestamp) * 1000
|
).timestamp()) * 1000
|
||||||
|
|
||||||
to_ms = int(arrow.utcnow()
|
to_ms = int(timeframe_to_prev_date(
|
||||||
.floor('minute')
|
self._lookback_timeframe,
|
||||||
.shift(minutes=-self._tf_in_min)
|
datetime.now(timezone.utc) - timedelta(minutes=self._tf_in_min)
|
||||||
.int_timestamp) * 1000
|
).timestamp()) * 1000
|
||||||
|
|
||||||
# todo: utc date output for starting date
|
# todo: utc date output for starting date
|
||||||
self.log_once(f"Using volume range of {self._lookback_period} candles, timeframe: "
|
self.log_once(f"Using volume range of {self._lookback_period} candles, timeframe: "
|
||||||
|
@ -1,5 +1,5 @@

 import re
-from typing import List
+from typing import Any, Dict, List


 def expand_pairlist(wildcardpl: List[str], available_pairs: List[str],

@ -42,7 +42,7 @@ def expand_pairlist(wildcardpl: List[str], available_pairs: List[str],

     return result


-def dynamic_expand_pairlist(config: dict, markets: list) -> List[str]:
+def dynamic_expand_pairlist(config: Dict[str, Any], markets: List[str]) -> List[str]:
     expanded_pairs = expand_pairlist(config['pairs'], markets)
     if config.get('freqai', {}).get('enabled', False):
         corr_pairlist = config['freqai']['feature_parameters']['include_corr_pairlist']
freqtrade/rpc/telegram.py
@@ -120,7 +120,8 @@ class Telegram(RPCHandler):
            r'/daily$', r'/daily \d+$', r'/profit$', r'/profit \d+',
            r'/stats$', r'/count$', r'/locks$', r'/balance$',
            r'/stopbuy$', r'/reload_config$', r'/show_config$',
-           r'/logs$', r'/whitelist$', r'/blacklist$', r'/bl_delete$',
+           r'/logs$', r'/whitelist$', r'/whitelist(\ssorted|\sbaseonly)+$',
+           r'/blacklist$', r'/bl_delete$',
            r'/weekly$', r'/weekly \d+$', r'/monthly$', r'/monthly \d+$',
            r'/forcebuy$', r'/forcelong$', r'/forceshort$',
            r'/forcesell$', r'/forceexit$',
@@ -1368,6 +1369,12 @@ class Telegram(RPCHandler):
        try:
            whitelist = self._rpc._rpc_whitelist()

+           if context.args:
+               if "sorted" in context.args:
+                   whitelist['whitelist'] = sorted(whitelist['whitelist'])
+               if "baseonly" in context.args:
+                   whitelist['whitelist'] = [pair.split("/")[0] for pair in whitelist['whitelist']]
+
            message = f"Using whitelist `{whitelist['method']}` with {whitelist['length']} pairs\n"
            message += f"`{', '.join(whitelist['whitelist'])}`"

@@ -1487,7 +1494,8 @@ class Telegram(RPCHandler):
            "*/fx <trade_id>|all:* `Alias to /forceexit`\n"
            f"{force_enter_text if self._config.get('force_entry_enable', False) else ''}"
            "*/delete <trade_id>:* `Instantly delete the given trade in the database`\n"
-           "*/whitelist:* `Show current whitelist` \n"
+           "*/whitelist [sorted] [baseonly]:* `Show current whitelist. Optionally in "
+           "order and/or only displaying the base currency of each pairing.`\n"
            "*/blacklist [pair]:* `Show current blacklist, or adds one or more pairs "
            "to the blacklist.` \n"
            "*/blacklist_delete [pairs]| /bl_delete [pairs]:* "
@@ -1524,7 +1532,7 @@ class Telegram(RPCHandler):
            "*/weekly <n>:* `Shows statistics per week, over the last n weeks`\n"
            "*/monthly <n>:* `Shows statistics per month, over the last n months`\n"
            "*/stats:* `Shows Wins / losses by Sell reason as well as "
-           "Avg. holding durationsfor buys and sells.`\n"
+           "Avg. holding durations for buys and sells.`\n"
            "*/help:* `This help message`\n"
            "*/version:* `Show version`"
            )
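A stand-alone sketch of what the two new argument branches do to the reply; the whitelist contents are illustrative, matching the shape returned by `_rpc_whitelist()`:

```python
whitelist = {'method': ['StaticPairList'], 'length': 4,
             'whitelist': ['ETH/BTC', 'LTC/BTC', 'XRP/BTC', 'NEO/BTC']}
context_args = ['sorted', 'baseonly']  # as parsed from "/whitelist sorted baseonly"

if context_args:
    if 'sorted' in context_args:
        whitelist['whitelist'] = sorted(whitelist['whitelist'])
    if 'baseonly' in context_args:
        whitelist['whitelist'] = [pair.split('/')[0] for pair in whitelist['whitelist']]

print(f"Using whitelist `{whitelist['method']}` with {whitelist['length']} pairs")
print(', '.join(whitelist['whitelist']))  # ETH, LTC, NEO, XRP
```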
freqtrade/strategy/interface.py
@@ -157,7 +157,8 @@ class IStrategy(ABC, HyperStrategyMixin):
            class DummyClass():
                def start(self, *args, **kwargs):
                    raise OperationalException(
-                       'freqAI is not enabled. Please enable it in your config to use this strategy.')
+                       'freqAI is not enabled. '
+                       'Please enable it in your config to use this strategy.')
            self.freqai = DummyClass()  # type: ignore

    def ft_bot_start(self, **kwargs) -> None:
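The message is merely re-wrapped for line length; the behavior is unchanged. A small sketch of the guard in action, using a stand-in exception class since this runs outside freqtrade:

```python
class OperationalException(Exception):
    # stand-in for freqtrade.exceptions.OperationalException
    pass


class DummyClass():
    def start(self, *args, **kwargs):
        raise OperationalException(
            'freqAI is not enabled. '
            'Please enable it in your config to use this strategy.')


freqai = DummyClass()
try:
    freqai.start()  # any strategy call into the disabled module fails loudly
except OperationalException as err:
    print(err)
```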
freqtrade/templates/FreqaiExampleStrategy.py
@@ -82,99 +82,98 @@ class FreqaiExampleStrategy(IStrategy):

        coin = pair.split('/')[0]

-       with self.freqai.lock:
-           if informative is None:
-               informative = self.dp.get_pair_dataframe(pair, tf)
+       if informative is None:
+           informative = self.dp.get_pair_dataframe(pair, tf)

        # first loop is automatically duplicating indicators for time periods
        for t in self.freqai_info["feature_parameters"]["indicator_periods_candles"]:

            t = int(t)
            informative[f"%-{coin}rsi-period_{t}"] = ta.RSI(informative, timeperiod=t)
            informative[f"%-{coin}mfi-period_{t}"] = ta.MFI(informative, timeperiod=t)
            informative[f"%-{coin}adx-period_{t}"] = ta.ADX(informative, window=t)
            informative[f"%-{coin}sma-period_{t}"] = ta.SMA(informative, timeperiod=t)
            informative[f"%-{coin}ema-period_{t}"] = ta.EMA(informative, timeperiod=t)

            informative[f"%-{coin}mfi-period_{t}"] = ta.MFI(informative, timeperiod=t)

            bollinger = qtpylib.bollinger_bands(
                qtpylib.typical_price(informative), window=t, stds=2.2
            )
            informative[f"{coin}bb_lowerband-period_{t}"] = bollinger["lower"]
            informative[f"{coin}bb_middleband-period_{t}"] = bollinger["mid"]
            informative[f"{coin}bb_upperband-period_{t}"] = bollinger["upper"]

            informative[f"%-{coin}bb_width-period_{t}"] = (
                informative[f"{coin}bb_upperband-period_{t}"]
                - informative[f"{coin}bb_lowerband-period_{t}"]
            ) / informative[f"{coin}bb_middleband-period_{t}"]
            informative[f"%-{coin}close-bb_lower-period_{t}"] = (
                informative["close"] / informative[f"{coin}bb_lowerband-period_{t}"]
            )

            informative[f"%-{coin}roc-period_{t}"] = ta.ROC(informative, timeperiod=t)

            informative[f"%-{coin}relative_volume-period_{t}"] = (
                informative["volume"] / informative["volume"].rolling(t).mean()
            )

        informative[f"%-{coin}pct-change"] = informative["close"].pct_change()
        informative[f"%-{coin}raw_volume"] = informative["volume"]
        informative[f"%-{coin}raw_price"] = informative["close"]

        indicators = [col for col in informative if col.startswith("%")]
        # This loop duplicates and shifts all indicators to add a sense of recency to data
        for n in range(self.freqai_info["feature_parameters"]["include_shifted_candles"] + 1):
            if n == 0:
                continue
            informative_shift = informative[indicators].shift(n)
            informative_shift = informative_shift.add_suffix("_shift-" + str(n))
            informative = pd.concat((informative, informative_shift), axis=1)

        df = merge_informative_pair(df, informative, self.config["timeframe"], tf, ffill=True)
        skip_columns = [
            (s + "_" + tf) for s in ["date", "open", "high", "low", "close", "volume"]
        ]
        df = df.drop(columns=skip_columns)

        # Add generalized indicators here (because in live, it will call this
        # function to populate indicators during training). Notice how we ensure not to
        # add them multiple times
        if set_generalized_indicators:
            df["%-day_of_week"] = (df["date"].dt.dayofweek + 1) / 7
            df["%-hour_of_day"] = (df["date"].dt.hour + 1) / 25

            # user adds targets here by prepending them with &- (see convention below)
            df["&-s_close"] = (
                df["close"]
                .shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
                .rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
                .mean()
                / df["close"]
                - 1
            )

            # Classifiers are typically set up with strings as targets:
            # df['&s-up_or_down'] = np.where( df["close"].shift(-100) >
            # df["close"], 'up', 'down')

            # If user wishes to use multiple targets, they can add more by
            # appending more columns with '&'. User should keep in mind that multi targets
            # requires a multioutput prediction model such as
            # templates/CatboostPredictionMultiModel.py,

            # df["&-s_range"] = (
            # df["close"]
            # .shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
            # .rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
            # .max()
            # -
            # df["close"]
            # .shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
            # .rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
            # .min()
            # )

        return df

@@ -252,12 +251,11 @@ class FreqaiExampleStrategy(IStrategy):
            "prediction" + entry_tag not in pair_dict[pair]
            or pair_dict[pair]['extras']["prediction" + entry_tag] == 0
        ):
-           with self.freqai.lock:
-               pair_dict[pair]['extras']["prediction" + entry_tag] = abs(trade_candle["&-s_close"])
-               if not follow_mode:
-                   self.freqai.dd.save_drawer_to_disk()
-               else:
-                   self.freqai.dd.save_follower_dict_to_disk()
+           pair_dict[pair]['extras']["prediction" + entry_tag] = abs(trade_candle["&-s_close"])
+           if not follow_mode:
+               self.freqai.dd.save_drawer_to_disk()
+           else:
+               self.freqai.dd.save_follower_dict_to_disk()

            roi_price = pair_dict[pair]['extras']["prediction" + entry_tag]
            roi_time = self.max_roi_time_long.value
@@ -296,12 +294,11 @@ class FreqaiExampleStrategy(IStrategy):
        else:
            pair_dict = self.freqai.dd.follower_dict

-       with self.freqai.lock:
-           pair_dict[pair]['extras']["prediction" + entry_tag] = 0
-           if not follow_mode:
-               self.freqai.dd.save_drawer_to_disk()
-           else:
-               self.freqai.dd.save_follower_dict_to_disk()
+       pair_dict[pair]['extras']["prediction" + entry_tag] = 0
+       if not follow_mode:
+           self.freqai.dd.save_drawer_to_disk()
+       else:
+           self.freqai.dd.save_follower_dict_to_disk()

        return True
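The substantive change in this file is that the `with self.freqai.lock:` wrappers disappear, here and in the test strategies further down; the persistence calls (`save_drawer_to_disk()`, `save_follower_dict_to_disk()`) are now invoked directly. For readers following the unchanged `&-s_close` target definition, a toy pandas check of what it computes; the window of 2 and the prices are illustrative:

```python
import pandas as pd

label_period_candles = 2  # illustrative; the strategy reads this from freqai_info
df = pd.DataFrame({'close': [100.0, 102.0, 101.0, 103.0, 104.0]})
df['&-s_close'] = (
    df['close']
    .shift(-label_period_candles)
    .rolling(label_period_candles)
    .mean()
    / df['close']
    - 1
)
# Row 1: mean(close[2], close[3]) / close[1] - 1 = (101 + 103) / 2 / 102 - 1 = 0.0
# i.e. the mean close over the next `label_period_candles` candles, relative
# to the current close; the last rows are NaN for lack of lookahead.
print(df)
```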
mkdocs.yml
@@ -1,6 +1,7 @@
 site_name: Freqtrade
 site_url: https://www.freqtrade.io/
 repo_url: https://github.com/freqtrade/freqtrade
+edit_uri: edit/develop/docs/
 use_directory_urls: True
 nav:
     - Home: index.md
requirements-dev.txt
@@ -20,7 +20,7 @@ isort==5.10.1
 time-machine==2.7.1

 # Convert jupyter notebooks to markdown documents
-nbconvert==6.5.0
+nbconvert==6.5.3

 # mypy types
 types-cachetools==5.2.1
requirements-hyperopt.txt
@@ -5,5 +5,5 @@
 scipy==1.9.0
 scikit-learn==1.1.2
 scikit-optimize==0.9.0
-filelock==3.7.1
+filelock==3.8.0
 progressbar2==4.0.0
requirements-plot.txt
@@ -1,4 +1,4 @@
 # Include all requirements to run the bot.
 -r requirements.txt

-plotly==5.9.0
+plotly==5.10.0
requirements.txt
@@ -1,12 +1,12 @@
-numpy==1.23.1
+numpy==1.23.2
 pandas==1.4.3
 pandas-ta==0.3.14b

-ccxt==1.91.93
+ccxt==1.92.20
 # Pin cryptography for now due to rust build errors with piwheels
 cryptography==37.0.4
 aiohttp==3.8.1
-SQLAlchemy==1.4.39
+SQLAlchemy==1.4.40
 python-telegram-bot==13.13
 arrow==1.2.2
 cachetools==4.2.2
@@ -28,7 +28,7 @@ py_find_1st==1.1.5
 # Load ticker files 30% faster
 python-rapidjson==1.8
 # Properly format api responses
-orjson==3.7.11
+orjson==3.7.12

 # Notify systemd
 sdnotify==0.3.2
tests/commands/test_commands.py
@@ -638,7 +638,7 @@ def test_get_ui_download_url_direct(mocker):
    x, last_version = get_ui_download_url('0.0.3')


-def test_download_data_keyboardInterrupt(mocker, caplog, markets):
+def test_download_data_keyboardInterrupt(mocker, markets):
    dl_mock = mocker.patch('freqtrade.commands.data_commands.refresh_backtest_ohlcv_data',
                           MagicMock(side_effect=KeyboardInterrupt))
    patch_exchange(mocker)
@@ -651,12 +651,15 @@ def test_download_data_keyboardInterrupt(mocker, caplog, markets):
        "--pairs", "ETH/BTC", "XRP/BTC",
    ]
    with pytest.raises(SystemExit):
-       start_download_data(get_args(args))
+       pargs = get_args(args)
+       pargs['config'] = None
+
+       start_download_data(pargs)

    assert dl_mock.call_count == 1


-def test_download_data_timerange(mocker, caplog, markets):
+def test_download_data_timerange(mocker, markets):
    dl_mock = mocker.patch('freqtrade.commands.data_commands.refresh_backtest_ohlcv_data',
                           MagicMock(return_value=["ETH/BTC", "XRP/BTC"]))
    patch_exchange(mocker)
@@ -672,7 +675,9 @@ def test_download_data_timerange(mocker, caplog, markets):
    ]
    with pytest.raises(OperationalException,
                       match=r"--days and --timerange are mutually.*"):
-       start_download_data(get_args(args))
+       pargs = get_args(args)
+       pargs['config'] = None
+       start_download_data(pargs)
    assert dl_mock.call_count == 0

    args = [
@@ -681,7 +686,9 @@ def test_download_data_timerange(mocker, caplog, markets):
        "--pairs", "ETH/BTC", "XRP/BTC",
        "--days", "20",
    ]
-   start_download_data(get_args(args))
+   pargs = get_args(args)
+   pargs['config'] = None
+   start_download_data(pargs)
    assert dl_mock.call_count == 1
    # 20days ago
    days_ago = arrow.get(arrow.now().shift(days=-20).date()).int_timestamp
@@ -694,7 +701,9 @@ def test_download_data_timerange(mocker, caplog, markets):
        "--pairs", "ETH/BTC", "XRP/BTC",
        "--timerange", "20200101-"
    ]
-   start_download_data(get_args(args))
+   pargs = get_args(args)
+   pargs['config'] = None
+   start_download_data(pargs)
    assert dl_mock.call_count == 1

    assert dl_mock.call_args_list[0][1]['timerange'].startts == arrow.Arrow(
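The repeated `pargs['config'] = None` pattern keeps these tests from picking up a real `config.json` from the working directory before the command builds its own configuration. A minimal stdlib-only illustration of the idea; the parser below is a stand-in, not freqtrade's `Arguments` class:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--config', default='config.json')
parser.add_argument('--pairs', nargs='+')

# vars() turns the namespace into the dict the tests mutate
pargs = vars(parser.parse_args(['--pairs', 'ETH/BTC', 'XRP/BTC']))
pargs['config'] = None  # mirrors pargs['config'] = None in the tests above
print(pargs)            # {'config': None, 'pairs': ['ETH/BTC', 'XRP/BTC']}
```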
tests/freqai/test_freqai_backtesting.py (new file, 57 lines)
@@ -0,0 +1,57 @@
+from copy import deepcopy
+from datetime import datetime, timezone
+from pathlib import Path
+from unittest.mock import PropertyMock
+
+import pytest
+
+from freqtrade.commands.optimize_commands import start_backtesting
+from freqtrade.exceptions import OperationalException
+from freqtrade.optimize.backtesting import Backtesting
+from tests.conftest import (CURRENT_TEST_STRATEGY, get_args, log_has_re, patch_exchange,
+                            patched_configuration_load_config_file)
+
+
+def test_freqai_backtest_start_backtest_list(freqai_conf, mocker, testdatadir):
+    patch_exchange(mocker)
+
+    mocker.patch('freqtrade.plugins.pairlistmanager.PairListManager.whitelist',
+                 PropertyMock(return_value=['HULUMULU/USDT', 'XRP/USDT']))
+    # mocker.patch('freqtrade.optimize.backtesting.Backtesting.backtest', backtestmock)
+
+    patched_configuration_load_config_file(mocker, freqai_conf)
+
+    args = [
+        'backtesting',
+        '--config', 'config.json',
+        '--datadir', str(testdatadir),
+        '--strategy-path', str(Path(__file__).parents[1] / 'strategy/strats'),
+        '--timeframe', '1h',
+        '--strategy-list', CURRENT_TEST_STRATEGY
+    ]
+    args = get_args(args)
+    with pytest.raises(OperationalException,
+                       match=r"You can't use strategy_list and freqai at the same time\."):
+        start_backtesting(args)
+
+
+def test_freqai_backtest_load_data(freqai_conf, mocker, caplog):
+    patch_exchange(mocker)
+
+    now = datetime.now(timezone.utc)
+    mocker.patch('freqtrade.plugins.pairlistmanager.PairListManager.whitelist',
+                 PropertyMock(return_value=['HULUMULU/USDT', 'XRP/USDT']))
+    mocker.patch('freqtrade.optimize.backtesting.history.load_data')
+    mocker.patch('freqtrade.optimize.backtesting.history.get_timerange', return_value=(now, now))
+    backtesting = Backtesting(deepcopy(freqai_conf))
+    backtesting.load_bt_data()
+
+    assert log_has_re('Increasing startup_candle_count for freqai to.*', caplog)
+
+    del freqai_conf['freqai']['startup_candles']
+    backtesting = Backtesting(freqai_conf)
+    with pytest.raises(OperationalException,
+                       match=r'FreqAI backtesting module.*startup_candles in config.'):
+        backtesting.load_bt_data()
+
+    Backtesting.cleanup()
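The `deepcopy(freqai_conf)` in the second test is deliberate: judging by the asserted log line, `Backtesting` can raise `startup_candle_count` from the freqai settings, so the fixture must not be handed over directly twice. A stdlib-only illustration of the hazard; the mutation below is a stand-in, not Backtesting's actual code:

```python
from copy import deepcopy

conf = {'freqai': {'startup_candles': 5000}, 'startup_candle_count': 30}

def fake_backtesting(config):
    # stand-in for the config mutation hinted at by the
    # "Increasing startup_candle_count for freqai" log message
    config['startup_candle_count'] = config['freqai']['startup_candles']

fake_backtesting(deepcopy(conf))
assert conf['startup_candle_count'] == 30    # original fixture untouched

fake_backtesting(conf)
assert conf['startup_candle_count'] == 5000  # without the copy, it is not
```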
(deleted file; its content moved to tests/freqai/test_freqai_backtesting.py above)
@@ -1,33 +0,0 @@
-from pathlib import Path
-from unittest.mock import PropertyMock
-
-import pytest
-
-from freqtrade.commands.optimize_commands import start_backtesting
-from freqtrade.exceptions import OperationalException
-from tests.conftest import (CURRENT_TEST_STRATEGY, get_args, patch_exchange,
-                            patched_configuration_load_config_file)
-
-
-def test_backtest_start_backtest_list_freqai(freqai_conf, mocker, testdatadir):
-    # Tests detail-data loading
-    patch_exchange(mocker)
-
-    mocker.patch('freqtrade.plugins.pairlistmanager.PairListManager.whitelist',
-                 PropertyMock(return_value=['HULUMULU/USDT', 'XRP/USDT']))
-    # mocker.patch('freqtrade.optimize.backtesting.Backtesting.backtest', backtestmock)
-
-    patched_configuration_load_config_file(mocker, freqai_conf)
-
-    args = [
-        'backtesting',
-        '--config', 'config.json',
-        '--datadir', str(testdatadir),
-        '--strategy-path', str(Path(__file__).parents[1] / 'strategy/strats'),
-        '--timeframe', '1h',
-        '--strategy-list', CURRENT_TEST_STRATEGY
-    ]
-    args = get_args(args)
-    with pytest.raises(OperationalException,
-                       match=r"You can't use strategy_list and freqai at the same time\."):
-        start_backtesting(args)
tests/freqai/test_freqai_interface.py
@@ -71,6 +71,7 @@ def test_train_model_in_series_LightGBMMultiModel(mocker, freqai_conf):
    assert Path(freqai.dk.data_path / f"{freqai.dk.model_filename}_metadata.json").is_file()
    assert Path(freqai.dk.data_path / f"{freqai.dk.model_filename}_trained_df.pkl").is_file()
    assert Path(freqai.dk.data_path / f"{freqai.dk.model_filename}_svm_model.joblib").is_file()
+   assert len(freqai.dk.data['training_features_list']) == 26

    shutil.rmtree(Path(freqai.dk.full_path))
tests/plugins/test_pairlist.py
@@ -12,7 +12,7 @@ from freqtrade.constants import AVAILABLE_PAIRLISTS
 from freqtrade.enums import CandleType, RunMode
 from freqtrade.exceptions import OperationalException
 from freqtrade.persistence import Trade
-from freqtrade.plugins.pairlist.pairlist_helpers import expand_pairlist
+from freqtrade.plugins.pairlist.pairlist_helpers import dynamic_expand_pairlist, expand_pairlist
 from freqtrade.plugins.pairlistmanager import PairListManager
 from freqtrade.resolvers import PairListResolver
 from tests.conftest import (create_mock_trades_usdt, get_patched_exchange, get_patched_freqtradebot,
@@ -1282,6 +1282,22 @@ def test_expand_pairlist(wildcardlist, pairs, expected):
        expand_pairlist(wildcardlist, pairs)
    else:
        assert sorted(expand_pairlist(wildcardlist, pairs)) == sorted(expected)
+       conf = {
+           'pairs': wildcardlist,
+           'freqai': {
+               "enabled": True,
+               "feature_parameters": {
+                   "include_corr_pairlist": [
+                       "BTC/USDT:USDT",
+                       "XRP/BUSD",
+                   ]
+               }
+           }
+       }
+       assert sorted(dynamic_expand_pairlist(conf, pairs)) == sorted(expected + [
+           "BTC/USDT:USDT",
+           "XRP/BUSD",
+       ])


 @pytest.mark.parametrize('wildcardlist,pairs,expected', [
|
@ -1458,6 +1458,27 @@ def test_whitelist_static(default_conf, update, mocker) -> None:
|
|||||||
assert ("Using whitelist `['StaticPairList']` with 4 pairs\n"
|
assert ("Using whitelist `['StaticPairList']` with 4 pairs\n"
|
||||||
"`ETH/BTC, LTC/BTC, XRP/BTC, NEO/BTC`" in msg_mock.call_args_list[0][0][0])
|
"`ETH/BTC, LTC/BTC, XRP/BTC, NEO/BTC`" in msg_mock.call_args_list[0][0][0])
|
||||||
|
|
||||||
|
context = MagicMock()
|
||||||
|
context.args = ['sorted']
|
||||||
|
msg_mock.reset_mock()
|
||||||
|
telegram._whitelist(update=update, context=context)
|
||||||
|
assert ("Using whitelist `['StaticPairList']` with 4 pairs\n"
|
||||||
|
"`ETH/BTC, LTC/BTC, NEO/BTC, XRP/BTC`" in msg_mock.call_args_list[0][0][0])
|
||||||
|
|
||||||
|
context = MagicMock()
|
||||||
|
context.args = ['baseonly']
|
||||||
|
msg_mock.reset_mock()
|
||||||
|
telegram._whitelist(update=update, context=context)
|
||||||
|
assert ("Using whitelist `['StaticPairList']` with 4 pairs\n"
|
||||||
|
"`ETH, LTC, XRP, NEO`" in msg_mock.call_args_list[0][0][0])
|
||||||
|
|
||||||
|
context = MagicMock()
|
||||||
|
context.args = ['baseonly', 'sorted']
|
||||||
|
msg_mock.reset_mock()
|
||||||
|
telegram._whitelist(update=update, context=context)
|
||||||
|
assert ("Using whitelist `['StaticPairList']` with 4 pairs\n"
|
||||||
|
"`ETH, LTC, NEO, XRP`" in msg_mock.call_args_list[0][0][0])
|
||||||
|
|
||||||
|
|
||||||
def test_whitelist_dynamic(default_conf, update, mocker) -> None:
|
def test_whitelist_dynamic(default_conf, update, mocker) -> None:
|
||||||
mocker.patch('freqtrade.exchange.Exchange.exchange_has', MagicMock(return_value=True))
|
mocker.patch('freqtrade.exchange.Exchange.exchange_has', MagicMock(return_value=True))
|
||||||
@ -1471,6 +1492,27 @@ def test_whitelist_dynamic(default_conf, update, mocker) -> None:
|
|||||||
assert ("Using whitelist `['VolumePairList']` with 4 pairs\n"
|
assert ("Using whitelist `['VolumePairList']` with 4 pairs\n"
|
||||||
"`ETH/BTC, LTC/BTC, XRP/BTC, NEO/BTC`" in msg_mock.call_args_list[0][0][0])
|
"`ETH/BTC, LTC/BTC, XRP/BTC, NEO/BTC`" in msg_mock.call_args_list[0][0][0])
|
||||||
|
|
||||||
|
context = MagicMock()
|
||||||
|
context.args = ['sorted']
|
||||||
|
msg_mock.reset_mock()
|
||||||
|
telegram._whitelist(update=update, context=context)
|
||||||
|
assert ("Using whitelist `['VolumePairList']` with 4 pairs\n"
|
||||||
|
"`ETH/BTC, LTC/BTC, NEO/BTC, XRP/BTC`" in msg_mock.call_args_list[0][0][0])
|
||||||
|
|
||||||
|
context = MagicMock()
|
||||||
|
context.args = ['baseonly']
|
||||||
|
msg_mock.reset_mock()
|
||||||
|
telegram._whitelist(update=update, context=context)
|
||||||
|
assert ("Using whitelist `['VolumePairList']` with 4 pairs\n"
|
||||||
|
"`ETH, LTC, XRP, NEO`" in msg_mock.call_args_list[0][0][0])
|
||||||
|
|
||||||
|
context = MagicMock()
|
||||||
|
context.args = ['baseonly', 'sorted']
|
||||||
|
msg_mock.reset_mock()
|
||||||
|
telegram._whitelist(update=update, context=context)
|
||||||
|
assert ("Using whitelist `['VolumePairList']` with 4 pairs\n"
|
||||||
|
"`ETH, LTC, NEO, XRP`" in msg_mock.call_args_list[0][0][0])
|
||||||
|
|
||||||
|
|
||||||
def test_blacklist_static(default_conf, update, mocker) -> None:
|
def test_blacklist_static(default_conf, update, mocker) -> None:
|
||||||
|
|
||||||
|
tests/strategy/strats/freqai_test_classifier.py
@@ -63,48 +63,47 @@ class freqai_test_classifier(IStrategy):

        coin = pair.split('/')[0]

-       with self.freqai.lock:
-           if informative is None:
-               informative = self.dp.get_pair_dataframe(pair, tf)
+       if informative is None:
+           informative = self.dp.get_pair_dataframe(pair, tf)

        # first loop is automatically duplicating indicators for time periods
        for t in self.freqai_info["feature_parameters"]["indicator_periods_candles"]:

            t = int(t)
            informative[f"%-{coin}rsi-period_{t}"] = ta.RSI(informative, timeperiod=t)
            informative[f"%-{coin}mfi-period_{t}"] = ta.MFI(informative, timeperiod=t)
            informative[f"%-{coin}adx-period_{t}"] = ta.ADX(informative, window=t)

        informative[f"%-{coin}pct-change"] = informative["close"].pct_change()
        informative[f"%-{coin}raw_volume"] = informative["volume"]
        informative[f"%-{coin}raw_price"] = informative["close"]

        indicators = [col for col in informative if col.startswith("%")]
        # This loop duplicates and shifts all indicators to add a sense of recency to data
        for n in range(self.freqai_info["feature_parameters"]["include_shifted_candles"] + 1):
            if n == 0:
                continue
            informative_shift = informative[indicators].shift(n)
            informative_shift = informative_shift.add_suffix("_shift-" + str(n))
            informative = pd.concat((informative, informative_shift), axis=1)

        df = merge_informative_pair(df, informative, self.config["timeframe"], tf, ffill=True)
        skip_columns = [
            (s + "_" + tf) for s in ["date", "open", "high", "low", "close", "volume"]
        ]
        df = df.drop(columns=skip_columns)

        # Add generalized indicators here (because in live, it will call this
        # function to populate indicators during training). Notice how we ensure not to
        # add them multiple times
        if set_generalized_indicators:
            df["%-day_of_week"] = (df["date"].dt.dayofweek + 1) / 7
            df["%-hour_of_day"] = (df["date"].dt.hour + 1) / 25

            # user adds targets here by prepending them with &- (see convention below)
            # If user wishes to use multiple targets, a multioutput prediction model
            # needs to be used such as templates/CatboostPredictionMultiModel.py
            df['&s-up_or_down'] = np.where(df["close"].shift(-100) > df["close"], 'up', 'down')

        return df
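Besides the same lock removal, this test strategy's unchanged classifier target is worth a second look. A toy check of the `np.where` labelling, using three candles and `shift(-1)` instead of `shift(-100)` purely for illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'close': [100.0, 101.0, 99.0]})
df['&s-up_or_down'] = np.where(df['close'].shift(-1) > df['close'], 'up', 'down')
# The NaN comparison on the last row is False, so rows without enough
# lookahead fall into the 'down' class.
print(df['&s-up_or_down'].tolist())  # ['up', 'down', 'down']
```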
tests/strategy/strats/freqai_test_multimodel_strat.py
@@ -62,67 +62,66 @@ class freqai_test_multimodel_strat(IStrategy):

        coin = pair.split('/')[0]

-       with self.freqai.lock:
-           if informative is None:
-               informative = self.dp.get_pair_dataframe(pair, tf)
+       if informative is None:
+           informative = self.dp.get_pair_dataframe(pair, tf)

        # first loop is automatically duplicating indicators for time periods
        for t in self.freqai_info["feature_parameters"]["indicator_periods_candles"]:

            t = int(t)
            informative[f"%-{coin}rsi-period_{t}"] = ta.RSI(informative, timeperiod=t)
            informative[f"%-{coin}mfi-period_{t}"] = ta.MFI(informative, timeperiod=t)
            informative[f"%-{coin}adx-period_{t}"] = ta.ADX(informative, window=t)

        informative[f"%-{coin}pct-change"] = informative["close"].pct_change()
        informative[f"%-{coin}raw_volume"] = informative["volume"]
        informative[f"%-{coin}raw_price"] = informative["close"]

        indicators = [col for col in informative if col.startswith("%")]
        # This loop duplicates and shifts all indicators to add a sense of recency to data
        for n in range(self.freqai_info["feature_parameters"]["include_shifted_candles"] + 1):
            if n == 0:
                continue
            informative_shift = informative[indicators].shift(n)
            informative_shift = informative_shift.add_suffix("_shift-" + str(n))
            informative = pd.concat((informative, informative_shift), axis=1)

        df = merge_informative_pair(df, informative, self.config["timeframe"], tf, ffill=True)
        skip_columns = [
            (s + "_" + tf) for s in ["date", "open", "high", "low", "close", "volume"]
        ]
        df = df.drop(columns=skip_columns)

        # Add generalized indicators here (because in live, it will call this
        # function to populate indicators during training). Notice how we ensure not to
        # add them multiple times
        if set_generalized_indicators:
            df["%-day_of_week"] = (df["date"].dt.dayofweek + 1) / 7
            df["%-hour_of_day"] = (df["date"].dt.hour + 1) / 25

            # user adds targets here by prepending them with &- (see convention below)
            # If user wishes to use multiple targets, a multioutput prediction model
            # needs to be used such as templates/CatboostPredictionMultiModel.py
            df["&-s_close"] = (
                df["close"]
                .shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
                .rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
                .mean()
                / df["close"]
                - 1
            )

            df["&-s_range"] = (
                df["close"]
                .shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
                .rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
                .max()
                -
                df["close"]
                .shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
                .rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
                .min()
            )

        return df
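This multi-model test strategy defines two targets (`&-s_close` and `&-s_range`), which is what forces a multi-output regressor such as `templates/CatboostPredictionMultiModel.py`. A toy check of the range target; the window of 2 and the prices are illustrative:

```python
import pandas as pd

label_period_candles = 2
close = pd.Series([100.0, 102.0, 101.0, 103.0, 104.0])
future = close.shift(-label_period_candles).rolling(label_period_candles)
# High-low spread of the closes over the next `label_period_candles` candles.
s_range = future.max() - future.min()
# Row 1: max(close[2], close[3]) - min(close[2], close[3]) = 103 - 101 = 2.0
print(s_range.tolist())
```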
tests/strategy/strats/freqai_test_strat.py
@@ -62,55 +62,54 @@ class freqai_test_strat(IStrategy):

        coin = pair.split('/')[0]

-       with self.freqai.lock:
-           if informative is None:
-               informative = self.dp.get_pair_dataframe(pair, tf)
+       if informative is None:
+           informative = self.dp.get_pair_dataframe(pair, tf)

        # first loop is automatically duplicating indicators for time periods
        for t in self.freqai_info["feature_parameters"]["indicator_periods_candles"]:

            t = int(t)
            informative[f"%-{coin}rsi-period_{t}"] = ta.RSI(informative, timeperiod=t)
            informative[f"%-{coin}mfi-period_{t}"] = ta.MFI(informative, timeperiod=t)
            informative[f"%-{coin}adx-period_{t}"] = ta.ADX(informative, window=t)

        informative[f"%-{coin}pct-change"] = informative["close"].pct_change()
        informative[f"%-{coin}raw_volume"] = informative["volume"]
        informative[f"%-{coin}raw_price"] = informative["close"]

        indicators = [col for col in informative if col.startswith("%")]
        # This loop duplicates and shifts all indicators to add a sense of recency to data
        for n in range(self.freqai_info["feature_parameters"]["include_shifted_candles"] + 1):
            if n == 0:
                continue
            informative_shift = informative[indicators].shift(n)
            informative_shift = informative_shift.add_suffix("_shift-" + str(n))
            informative = pd.concat((informative, informative_shift), axis=1)

        df = merge_informative_pair(df, informative, self.config["timeframe"], tf, ffill=True)
        skip_columns = [
            (s + "_" + tf) for s in ["date", "open", "high", "low", "close", "volume"]
        ]
        df = df.drop(columns=skip_columns)

        # Add generalized indicators here (because in live, it will call this
        # function to populate indicators during training). Notice how we ensure not to
        # add them multiple times
        if set_generalized_indicators:
            df["%-day_of_week"] = (df["date"].dt.dayofweek + 1) / 7
            df["%-hour_of_day"] = (df["date"].dt.hour + 1) / 25

            # user adds targets here by prepending them with &- (see convention below)
            # If user wishes to use multiple targets, a multioutput prediction model
            # needs to be used such as templates/CatboostPredictionMultiModel.py
            df["&-s_close"] = (
                df["close"]
                .shift(-self.freqai_info["feature_parameters"]["label_period_candles"])
                .rolling(self.freqai_info["feature_parameters"]["label_period_candles"])
                .mean()
                / df["close"]
                - 1
            )

        return df
tests/test_freqtradebot.py
@@ -3451,7 +3451,7 @@ def test_execute_trade_exit_down_stoploss_on_exchange_dry_run(

    trade.stop_loss = 2.0 * 1.01 if is_short else 2.0 * 0.99
    freqtrade.execute_trade_exit(
-       trade=trade, limit=(ticker_usdt_sell_up if is_short else ticker_usdt_sell_down())['bid'],
+       trade=trade, limit=trade.stop_loss,
        exit_check=ExitCheckTuple(exit_type=ExitType.STOP_LOSS))

    assert rpc_mock.call_count == 2
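The changed call makes the dry-run stop-loss exit use the trade's own stop price as the limit rather than a ticker bid quote, which ties the asserted fill to the stop-loss computed two lines above. The arithmetic being exercised, in miniature, with the values from the test:

```python
# Stop price computed in the test, for both directions:
for is_short in (False, True):
    stop_loss = 2.0 * 1.01 if is_short else 2.0 * 0.99
    print(is_short, stop_loss)  # False 1.98 / True 2.02 - passed as limit=trade.stop_loss
```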