improve doc, update test strats, change function names

robcaulk
2022-12-28 13:25:40 +01:00
parent 8227b4aafe
commit c2936d551b
14 changed files with 515 additions and 594 deletions


@@ -34,55 +34,25 @@ Setting up and running a Reinforcement Learning model is the same as running a R
freqtrade trade --freqaimodel ReinforcementLearner --strategy MyRLStrategy --config config.json
```
-where `ReinforcementLearner` will use the templated `ReinforcementLearner` from `freqai/prediction_models/ReinforcementLearner` (or a custom user defined one located in `user_data/freqaimodels`). The strategy, on the other hand, follows the same base [feature engineering](freqai-feature-engineering.md) with `populate_any_indicators` as a typical Regressor:
+where `ReinforcementLearner` will use the templated `ReinforcementLearner` from `freqai/prediction_models/ReinforcementLearner` (or a custom user defined one located in `user_data/freqaimodels`). The strategy, on the other hand, follows the same base [feature engineering](freqai-feature-engineering.md) with `feature_engineering_*` as a typical Regressor. The difference lies in the creation of the targets: Reinforcement Learning doesn't require them. However, FreqAI requires a default (neutral) value to be set in the action column:
```python
-    def populate_any_indicators(
-        self, pair, df, tf, informative=None, set_generalized_indicators=False
-    ):
-
-        if informative is None:
-            informative = self.dp.get_pair_dataframe(pair, tf)
-
-        # first loop is automatically duplicating indicators for time periods
-        for t in self.freqai_info["feature_parameters"]["indicator_periods_candles"]:
-
-            t = int(t)
-            informative[f"%-{pair}rsi-period_{t}"] = ta.RSI(informative, timeperiod=t)
-            informative[f"%-{pair}mfi-period_{t}"] = ta.MFI(informative, timeperiod=t)
-            informative[f"%-{pair}adx-period_{t}"] = ta.ADX(informative, window=t)
-
-        # The following raw price values are necessary for RL models
-        informative[f"%-{pair}raw_close"] = informative["close"]
-        informative[f"%-{pair}raw_open"] = informative["open"]
-        informative[f"%-{pair}raw_high"] = informative["high"]
-        informative[f"%-{pair}raw_low"] = informative["low"]
-
-        indicators = [col for col in informative if col.startswith("%")]
-        # This loop duplicates and shifts all indicators to add a sense of recency to data
-        for n in range(self.freqai_info["feature_parameters"]["include_shifted_candles"] + 1):
-            if n == 0:
-                continue
-            informative_shift = informative[indicators].shift(n)
-            informative_shift = informative_shift.add_suffix("_shift-" + str(n))
-            informative = pd.concat((informative, informative_shift), axis=1)
-
-        df = merge_informative_pair(df, informative, self.config["timeframe"], tf, ffill=True)
-        skip_columns = [
-            (s + "_" + tf) for s in ["date", "open", "high", "low", "close", "volume"]
-        ]
-        df = df.drop(columns=skip_columns)
-
-        # Add generalized indicators here (because in live, it will call this
-        # function to populate indicators during training). Notice how we ensure not to
-        # add them multiple times
-        if set_generalized_indicators:
-
-            # For RL, there are no direct targets to set. This is filler (neutral)
-            # until the agent sends an action.
-            df["&-action"] = 0
-
-        return df
+    def set_freqai_targets(self, dataframe, **kwargs):
+        """
+        *Only functional with FreqAI enabled strategies*
+        Required function to set the targets for the model.
+        All targets must be prepended with `&` to be recognized by the FreqAI internals.
+
+        More details about feature engineering available:
+
+        https://www.freqtrade.io/en/latest/freqai-feature-engineering
+
+        :param dataframe: strategy dataframe which will receive the targets
+        usage example: dataframe["&-target"] = dataframe["close"].shift(-1) / dataframe["close"]
+        """
+        # For RL, there are no direct targets to set. This is filler (neutral)
+        # until the agent sends an action.
+        dataframe["&-action"] = 0
```
Most of the function remains the same as for typical Regressors; however, the function below shows how the strategy must pass the raw price data to the agent so that it has access to raw OHLCV in the training environment:
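The hunk shown here cuts off before that code block, so the following is only a minimal sketch of what such a function could look like under the new `feature_engineering_*` interface. The hook name `feature_engineering_standard` and the plain `%-raw_*` column names are assumptions based on the raw price features removed from `populate_any_indicators` above; they are not lines from this commit:

```python
    def feature_engineering_standard(self, dataframe, **kwargs):
        # Sketch only: expose raw OHLCV to the RL agent as `%-` features so the
        # training environment can see actual prices alongside the engineered
        # indicators. Hook and column names are assumed, not taken from the diff.
        dataframe["%-raw_close"] = dataframe["close"]
        dataframe["%-raw_open"] = dataframe["open"]
        dataframe["%-raw_high"] = dataframe["high"]
        dataframe["%-raw_low"] = dataframe["low"]
        return dataframe
```

Prefixing the raw values with `%-` keeps them inside FreqAI's feature set (the same convention the removed code used when collecting columns starting with `%`), which is what lets them reach the RL training environment.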