From 1a8e1362a12e9b09ac481d711a63101cb81663fe Mon Sep 17 00:00:00 2001
From: Richard Jozsa <38407205+richardjozsa@users.noreply.github.com>
Date: Mon, 29 Aug 2022 11:15:06 +0200
Subject: [PATCH] There was an error in the docs around continual learning and
thread count (#7314)

* Error in the docs
---
docs/freqai.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/freqai.md b/docs/freqai.md
index eb76850b6..140e8acf9 100644
--- a/docs/freqai.md
+++ b/docs/freqai.md
@@ -129,8 +129,8 @@ Mandatory parameters are marked as **Required**, which means that they are requi
 | `max_trade_duration_candles`| Guides the agent training to keep trades below desired length. Example usage shown in `prediction_models/ReinforcementLearner.py` within the user customizable `calculate_reward()` <br> **Datatype:** int.
 | `model_type` | Model string from stable_baselines3 or SBcontrib. Available strings include: `'TRPO', 'ARS', 'RecurrentPPO', 'MaskablePPO', 'PPO', 'A2C', 'DQN'`. User should ensure that `model_training_parameters` match those available to the corresponding stable_baselines3 model by visiting their documentation. [PPO doc](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) (external website) <br> **Datatype:** string.
 | `policy_type` | One of the available policy types from stable_baselines3 <br> **Datatype:** string.
-| `continual_learning` | Number of threads to dedicate to the Reinforcement Learning training process. <br> **Datatype:** int.
-| `thread_count` | If true, the agent will start new trainings from the model selected during the previous training. If false, a new agent is trained from scratch for each training. <br> **Datatype:** Bool.
+| `continual_learning` | If true, the agent will start new trainings from the model selected during the previous training. If false, a new agent is trained from scratch for each training. <br> **Datatype:** Bool.
+| `thread_count` | Number of threads to dedicate to the Reinforcement Learning training process. <br> **Datatype:** int.
 | `model_reward_parameters` | Parameters used inside the user customizable `calculate_reward()` function in `ReinforcementLearner.py` <br> **Datatype:** int.
 | | **Extraneous parameters**
 | `keras` | If your model makes use of keras (typical of Tensorflow based prediction models), activate this flag so that the model save/loading follows keras standards. Default value `false` <br> **Datatype:** boolean.
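
With the swapped descriptions corrected, the two parameters would be set as follows in a FreqAI configuration. This is a minimal, illustrative sketch only: the values shown are placeholders, not recommendations, and the surrounding key structure (an `rl_config` block inside `freqai`) is assumed from the Reinforcement Learning parameter table this patch edits.

```json
{
  "freqai": {
    "rl_config": {
      "model_type": "PPO",
      "policy_type": "MlpPolicy",
      "continual_learning": false,
      "thread_count": 4
    }
  }
}
```

Here `continual_learning` is a boolean (resume training from the previously trained agent, or start fresh each time) and `thread_count` is an integer (threads dedicated to the RL training process), matching the corrected datatypes in the table above.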