Fix import in docs

Matthias 2022-11-26 13:12:44 +01:00
parent cf2f12b472
commit 8660ac9aa0
1 changed file with 2 additions and 1 deletion


@@ -165,7 +165,8 @@ Parameter details can be found [here](freqai-parameter-table.md), but in general
As you begin to modify the strategy and the prediction model, you will quickly realize some important differences between the Reinforcement Learner and the Regressors/Classifiers. Firstly, the strategy does not set a target value (no labels!). Instead, you set the `calculate_reward()` function inside the `MyRLEnv` class (see below). A default `calculate_reward()` is provided inside `prediction_models/ReinforcementLearner.py` to demonstrate the necessary building blocks for creating rewards, but users are encouraged to create their own custom reinforcement learning model class (see below) and save it to `user_data/freqaimodels`. It is inside the `calculate_reward()` where creative theories about the market can be expressed. For example, you can reward your agent when it makes a winning trade, and penalize the agent when it makes a losing trade. Or perhaps, you wish to reward the agent for entering trades, and penalize the agent for sitting in trades too long. Below we show examples of how these rewards are all calculated:
```python
-import from freqtrade.freqai.prediction_models ReinforcementLearner import ReinforcementLearner
+from freqtrade.freqai.prediction_models.ReinforcementLearner import ReinforcementLearner
+from freqtrade.freqai.RL.Base5ActionRLEnv import Base5ActionRLEnv
class MyCoolRLModel(ReinforcementLearner):
"""