diff --git a/docs/freqai-parameter-table.md b/docs/freqai-parameter-table.md
index 275062a33..122d87459 100644
--- a/docs/freqai-parameter-table.md
+++ b/docs/freqai-parameter-table.md
@@ -85,6 +85,26 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
 | `net_arch` | Network architecture which is well described in [`stable_baselines3` doc](https://stable-baselines3.readthedocs.io/en/master/guide/custom_policy.html#examples). In summary: `[<shared layers>, dict(vf=[<non-shared value network layers>], pi=[<non-shared policy network layers>])]`. By default this is set to `[128, 128]`, which defines 2 shared hidden layers with 128 units each.
 | `randomize_starting_position` | Randomize the starting point of each episode to avoid overfitting. <br> **Datatype:** bool. <br> Default: `False`.
 
+### PyTorch parameters
+
+#### general
+
+| Parameter | Description |
+|------------|-------------|
+|  | **Model training parameters within the `freqai.model_training_parameters` sub dictionary**
+| `learning_rate` | Learning rate to be passed to the optimizer. <br> **Datatype:** float. <br> Default: `3e-4`.
+| `model_kwargs` | Parameters to be passed to the model class. <br> **Datatype:** dict. <br> Default: `{}`.
+| `trainer_kwargs` | Parameters to be passed to the trainer class. <br> **Datatype:** dict. <br> Default: `{}`.
+
+#### trainer_kwargs
+
+| Parameter | Description |
+|------------|-------------|
+| `max_iters` | The number of training iterations to run. An iteration here refers to one call to `self.optimizer.step()`. Used to calculate `n_epochs`. <br> **Datatype:** int. <br> Default: `100`.
+| `batch_size` | The size of the batches to use during training. <br> **Datatype:** int. <br> Default: `64`.
+| `max_n_eval_batches` | The maximum number of batches to use for evaluation. <br> **Datatype:** int, optional. <br> Default: `None`.
+
+
 ### Additional parameters
 
 | Parameter | Description |