add note that these environments are designed for short-long bots only.

robcaulk 2022-11-13 17:30:56 +01:00
parent c76afc255a
commit c8d3e57712


@@ -224,3 +224,6 @@ FreqAI provides two base environments, `Base4ActionEnvironment` and `Base5Action
* the actions consumed by the user strategy
Both of the FreqAI-provided environments inherit from an action/position-agnostic environment object called `BaseEnvironment`, which contains all shared logic. The architecture is designed to be easily customized. The simplest customization is overriding the `calculate_reward()` function (see details [here](#creating-the-reward)). However, the customizations can be extended into any of the functions inside the environment. You can do this by simply overriding those functions inside your `MyRLEnv` in the prediction model file. For more advanced customizations, it is encouraged to create an entirely new environment that inherits from `BaseEnvironment`.
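For illustration, here is a minimal sketch of the simplest customization described above: overriding `calculate_reward()` inside `MyRLEnv` in the prediction model file. The module paths, the `ReinforcementLearner` and `Base5ActionRLEnv` class names, and helpers such as `_is_valid()` and `get_unrealized_profit()` are assumptions about a typical FreqAI version and may differ in yours:

```python
from freqtrade.freqai.prediction_models.ReinforcementLearner import ReinforcementLearner
from freqtrade.freqai.RL.Base5ActionRLEnv import Actions, Base5ActionRLEnv, Positions


class MyCoolRLModel(ReinforcementLearner):
    """Hypothetical prediction model that customizes only the reward."""

    class MyRLEnv(Base5ActionRLEnv):
        def calculate_reward(self, action: int) -> float:
            # Penalize actions that are invalid for the current position
            # (e.g. trying to exit when no trade is open).
            if not self._is_valid(action):
                return -2.0
            # Encourage entering trades from a neutral position.
            if (action in (Actions.Long_enter.value, Actions.Short_enter.value)
                    and self._position == Positions.Neutral):
                return 25.0
            # Reward closing a trade by its unrealized profit.
            if action in (Actions.Long_exit.value, Actions.Short_exit.value):
                return float(self.get_unrealized_profit())
            return 0.0
```

Any other environment function (e.g. the step or signal logic) can be overridden in the same nested-class fashion.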
!!! Note
    FreqAI does not provide a long-only training environment by default. However, creating one should be as simple as copy-pasting one of the built-in environments and removing the `short` actions (and all associated references to them).
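As a rough sketch of the note above (not part of FreqAI itself), a long-only variant could duplicate one of the built-in environments with the short action removed. The `BaseEnvironment` import path, the `set_action_space()` hook, the action names, and the choice of `gym` vs `gymnasium` are assumptions and should be checked against the environment you copy from:

```python
from enum import Enum

from gym import spaces  # or gymnasium, depending on your FreqAI version

from freqtrade.freqai.RL.BaseEnvironment import BaseEnvironment  # assumed import path


class Actions(Enum):
    # Same as the built-in 4-action set, minus the short entry.
    Neutral = 0
    Exit = 1
    Long_enter = 2


class Base3ActionLongOnlyEnv(BaseEnvironment):
    """Hypothetical long-only environment.

    The step()/calculate_reward()/signal logic would be copied from the
    built-in 4-action environment, with every branch that handles short
    positions deleted.
    """

    def set_action_space(self):
        # Expose only the three long-only actions to the agent.
        self.action_space = spaces.Discrete(len(Actions))

    def calculate_reward(self, action: int) -> float:
        # Copy the reward logic from the duplicated environment and
        # remove the short-position branches.
        raise NotImplementedError
```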