Commit Graph

39 Commits

Author SHA1 Message Date
robcaulk
d9dc831772 allow user to drop ohlc from features in RL 2023-03-07 11:33:54 +01:00
Matthias
48ecc7f6dc Update freqai-reinforcement-learning docs (closes #8199) 2023-02-21 19:55:32 +01:00
thorntwig
35fe37199d fix minor typos 2023-02-16 20:04:42 +01:00
robcaulk
8873a565ee expose raw features to the environment for use in calculate_reward 2023-02-10 15:48:18 +01:00
robcaulk
4fc0edb8b7 add pair to environment for access inside calculate_reward 2023-02-10 14:45:50 +01:00
Matthias
8647c0192c Fix typo 2023-01-26 07:08:38 +01:00
Matthias
2333dbae40 Update reinforcement learning docs to use correct naming 2023-01-26 07:07:49 +01:00
Robert Caulk
bfd7803fd8 Update freqai-reinforcement-learning.md 2023-01-12 22:18:22 +01:00
robcaulk
c2936d551b improve doc, update test strats, change function names 2022-12-28 13:25:40 +01:00
robcaulk
5b9e3af276 improve wording 2022-12-19 12:22:15 +01:00
robcaulk
5405d8fa6f add discussion and tips for Base3ActionRLEnvironment 2022-12-19 12:14:53 +01:00
initrv
f940280d5e Fix tensorboard_log incrementing note 2022-12-12 14:35:44 +03:00
robcaulk
0fd8e214e4 add documentation for tensorboard_log, change how users interact with tensorboard_log 2022-12-11 15:31:29 +01:00
smarmau
f7b4fc5bbc Update freqai-reinforcement-learning.md: change typo of default Tensorboard port to reflect correct port (6006) 2022-12-04 22:22:23 +11:00
Emre
f21dbbd8bb Update imports of custom model 2022-11-28 00:06:02 +03:00
robcaulk
67d9469277 small wording fix 2022-11-27 20:42:04 +01:00
Emre
a02da08065 Fix typo 2022-11-27 22:23:00 +03:00
Emre
5b5859238b Fix typo 2022-11-27 22:06:14 +03:00
Emre
fe00a65163 Fix custom reward link 2022-11-27 21:34:07 +03:00
robcaulk
aaaa5a5f64 add documentation for net_arch, other small changes 2022-11-26 13:44:58 +01:00
Matthias
8660ac9aa0 Fix import in docs 2022-11-26 13:12:44 +01:00
robcaulk
81fd2e588f ensure typing, remove unused code 2022-11-26 12:11:59 +01:00
robcaulk
9f13d99b99 improve parameter table, add better documentation for custom calculate_reward, add various helpful notes in docstrings etc 2022-11-26 11:32:39 +01:00
Matthias
8f1a8c752b Add freqairl docker build process 2022-11-24 07:00:12 +01:00
Matthias
3d26659d5e Fix some doc typos 2022-11-23 20:09:55 +01:00
robcaulk
d02da279f8 document the simplifications of the training environment 2022-11-19 13:20:20 +01:00
robcaulk
60fcd8dce2 fix skipped mac test, fix RL bug in add_state_info, fix use of __import__, revise doc 2022-11-17 21:50:02 +01:00
robcaulk
c8d3e57712 add note that these environments are designed for short-long bots only. 2022-11-13 17:30:56 +01:00
robcaulk
c76afc255a explain how to choose environments, and how to customize them 2022-11-13 17:26:11 +01:00
robcaulk
90f168d1ff remove more user references. cleanup dataprovider 2022-11-13 17:06:06 +01:00
robcaulk
f8f553ec14 remove references to "the user" 2022-11-13 16:58:36 +01:00
robcaulk
388ca21200 update docs, fix bug in environment 2022-11-13 16:56:31 +01:00
robcaulk
9c6b97c678 ensure normalization acceleration methods are employed in RL 2022-11-12 12:01:59 +01:00
robcaulk
e5204101d9 add tensorboard back to reqs to keep default integration working (and for docker) 2022-10-05 21:34:10 +02:00
robcaulk
ab4705efd2 provide background and goals for RL in doc 2022-10-05 16:39:38 +02:00
robcaulk
936ca24482 separate RL install from general FAI install, update docs 2022-10-05 15:58:54 +02:00
robcaulk
292d72d593 automatically handle model_save_type for user 2022-10-03 18:42:20 +02:00
robcaulk
cf882fa84e fix tests 2022-10-01 20:26:41 +02:00
robcaulk
ab9d781b06 add reinforcement learning page to docs 2022-10-01 17:50:05 +02:00