Author | Commit | Message | Date
robcaulk | 4b9499e321 | improve nomenclature and fix short exit bug | 2022-08-24 13:00:55 +02:00
sonnhfit | 4baa36bdcf | fix persisting of a single training environment for PPO | 2022-08-24 13:00:55 +02:00
robcaulk | f95602f6bd | persist a single training environment | 2022-08-24 13:00:55 +02:00
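The two env-persistence commits above describe reusing one long-lived training environment across retraining cycles instead of rebuilding it each time. Below is a minimal sketch of that general pattern with stable-baselines3, using a stock gym env as a stand-in for the project's trading environment; all names and hyperparameters are illustrative, not the project's actual hook points.

```python
import gym
from stable_baselines3 import PPO

# One long-lived environment, created once and reused every cycle,
# instead of constructing a fresh env (and its buffers) per retraining.
env = gym.make("CartPole-v1")  # stand-in for the trading environment
model = PPO("MlpPolicy", env, verbose=0)

for cycle in range(3):
    # In the real project, the env's market data would be refreshed here.
    env.reset()
    model.set_env(env)  # reattach the persisted env to the model
    model.learn(total_timesteps=1_000, reset_num_timesteps=False)
```

Passing `reset_num_timesteps=False` keeps the model's timestep counter continuous across cycles, so the retrainings read as one training run in the logs.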
robcaulk | 5d4e5e69fe | reinforce training and prediction with state info, restructure config to accommodate all parameters from any user-imported model type, set 5Act as the default env for TDQN, clean example config | 2022-08-24 13:00:55 +02:00
sonnhfit | 7962a1439b | remove "keep position on low profit" behavior | 2022-08-24 13:00:55 +02:00
sonnhfit | 81b5aa66e8 | make env keep current position when profit is low | 2022-08-24 13:00:55 +02:00
sonnhfit | 45218faeb0 | fix coding conventions | 2022-08-24 13:00:55 +02:00
richardjozsa | d55092ff17 | Docker build update, and TDQN repair for the newer release of SB+ | 2022-08-24 13:00:55 +02:00
robcaulk | 74e4fd0633 | ensure config example can work with backtesting RL | 2022-08-24 13:00:55 +02:00
robcaulk | b90da46b1b | improve price df handling to enable backtesting | 2022-08-24 13:00:55 +02:00
MukavaValkku | 2080ff86ed | 5ac base fixes in logic | 2022-08-24 13:00:55 +02:00
robcaulk | 16cec7dfbd | fix save/reload functionality for stablebaselines | 2022-08-24 13:00:55 +02:00
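For context on the save/reload fix above: stable-baselines3 serializes the policy weights and hyperparameters to a zip archive, but not the environment, which must be re-attached on load. A minimal sketch of the round trip, with a stock gym env and an illustrative path:

```python
import gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")  # stand-in for the trading environment
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=1_000)
model.save("ppo_model")  # writes ppo_model.zip

# The env is not serialized, so it is supplied again on load.
reloaded = PPO.load("ppo_model", env=gym.make("CartPole-v1"))
obs = reloaded.env.reset()
action, _ = reloaded.predict(obs, deterministic=True)
```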
sonnhfit | 0475b7cb18 | remove unused code and fix coding conventions | 2022-08-24 13:00:55 +02:00
MukavaValkku | d60a166fbf | multiproc TDQN with extra callbacks | 2022-08-24 13:00:55 +02:00
robcaulk | dd382dd370 | add monitor to eval env so that multiproc can save best_model | 2022-08-24 13:00:55 +02:00
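The commit above touches a stable-baselines3 detail worth spelling out: `EvalCallback` only sees episode rewards, and therefore only saves `best_model`, when the evaluation env is wrapped in `Monitor`. A minimal multiprocessing sketch of that pattern, with a stock gym env standing in for the trading environment and illustrative paths and frequencies:

```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import EvalCallback
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import SubprocVecEnv


def make_env():
    # Monitor records episode rewards/lengths, which EvalCallback
    # needs in order to decide when to save best_model.
    return Monitor(gym.make("CartPole-v1"))


if __name__ == "__main__":  # guard required by SubprocVecEnv workers
    train_env = SubprocVecEnv([make_env for _ in range(4)])
    eval_env = Monitor(gym.make("CartPole-v1"))
    callback = EvalCallback(
        eval_env,
        best_model_save_path="./best_model/",
        eval_freq=1_000,
        deterministic=True,
    )
    model = PPO("MlpPolicy", train_env, verbose=1)
    model.learn(total_timesteps=20_000, callback=callback)
```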
robcaulk | 69d542d3e2 | match config and strats to upstream freqai | 2022-08-24 13:00:55 +02:00
robcaulk | e5df39e891 | ensure best_model is placed in RAM, saved to disk, and loaded from disk | 2022-08-24 13:00:55 +02:00
robcaulk | bf7ceba958 | set cpu threads in config | 2022-08-24 13:00:55 +02:00
MukavaValkku | 57c488a6f1 | learning_rate + multicpu changes | 2022-08-24 13:00:55 +02:00
MukavaValkku | 48bb51b458 | example config added | 2022-08-24 13:00:55 +02:00
MukavaValkku | b1fc5a06ca | example config added | 2022-08-24 13:00:55 +02:00
sonnhfit | 6d8e838a8f | update tensorboard dependency | 2022-08-24 13:00:55 +02:00
robcaulk | acf3484e88 | add multiprocessing variant of ReinforcementLearningPPO | 2022-08-24 13:00:55 +02:00
MukavaValkku | cf0731095f | type fix | 2022-08-24 13:00:55 +02:00
MukavaValkku | 1c81ec6016 | 3ac and 5ac example strategies | 2022-08-24 13:00:55 +02:00
MukavaValkku | 13cd18dc9a | PPO policy change + verbose=1 | 2022-08-24 13:00:55 +02:00
robcaulk | 926023935f | make base 3ac and base 5ac environments; TDQN defaults to 3AC | 2022-08-24 13:00:55 +02:00
MukavaValkku | 096533bcb9 | 3ac to 5ac | 2022-08-24 13:00:55 +02:00
MukavaValkku | 718c9d0440 | action fix | 2022-08-24 13:00:55 +02:00
robcaulk | 9c78e6c26f | base PPO model only customizes reward for 3AC | 2022-08-24 13:00:55 +02:00
robcaulk | 6048f60f13 | get TDQN working with 5 action environment | 2022-08-24 13:00:55 +02:00
robcaulk | d4db5c3281 | ensure TDQN class is properly named | 2022-08-24 13:00:55 +02:00
robcaulk | 91683e1dca | restructure RL so that user can customize environment | 2022-08-24 13:00:55 +02:00
sonnhfit | ecd1f55abc | add rl module | 2022-08-24 13:00:55 +02:00
sonnhfit | 70b25461f0 | add rl dependency | 2022-08-24 13:00:55 +02:00
MukavaValkku | 9b895500b3 | initial commit - new dev branch | 2022-08-24 13:00:55 +02:00
MukavaValkku | cd3fe44424 | callback function and TDQN model added | 2022-08-24 13:00:55 +02:00
MukavaValkku | 01232e9a1f | callback function and TDQN model added | 2022-08-24 13:00:55 +02:00
MukavaValkku | 8eeaab2746 | add reward function | 2022-08-24 13:00:55 +02:00
MukavaValkku | ec813434f5 | ReinforcementLearningModel | 2022-08-24 13:00:55 +02:00
MukavaValkku | 2f4d73eb06 | Revert "ReinforcementLearningModel" (reverts commit 4d8dfe1ff1daa47276eda77118ddf39c13512a85) | 2022-08-24 13:00:55 +02:00
MukavaValkku | c1e7db3130 | ReinforcementLearningModel | 2022-08-24 13:00:55 +02:00
robcaulk | 05ed1b544f | Working base for reinforcement learning model | 2022-08-24 13:00:40 +02:00
Matthias | a6d78a8615 | initialize Since parameter properly (closes #7285) | 2022-08-23 06:43:04 +02:00
Matthias | fe7108ae75 | Convert amount to contracts before comparing for close (closes #7279) | 2022-08-23 06:37:38 +02:00
Matthias | 78b161e14c | add contract_size to database | 2022-08-23 06:37:38 +02:00
Matthias | 6036018f35 | Extract contracts_to_amount and amount_to_contracts to standalone functions | 2022-08-23 06:37:38 +02:00
Matthias | 1b0f37a93c | Fix documentation typo | 2022-08-23 06:37:38 +02:00
Matthias | 5f38a574ce | Add okx broker id | 2022-08-23 06:37:38 +02:00
th0rntwig | 5ce1c69803 | Improve DBSCAN epsilon identification (#7269) | 2022-08-22 19:57:20 +02:00
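On the last entry: DBSCAN's epsilon (the neighborhood radius) is commonly identified from the elbow of the sorted k-nearest-neighbor distance curve. The sketch below illustrates that general technique only; it is not the actual code of #7269, and `estimate_epsilon` is a hypothetical helper.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors


def estimate_epsilon(X: np.ndarray, min_samples: int = 4) -> float:
    """Pick epsilon at the elbow of the sorted k-distance curve."""
    dists, _ = NearestNeighbors(n_neighbors=min_samples).fit(X).kneighbors(X)
    k_dists = np.sort(dists[:, -1])  # distance of each point to its k-th neighbor
    # Crude elbow detection: index of maximum second difference.
    elbow = int(np.argmax(np.diff(k_dists, n=2))) + 1
    return float(k_dists[elbow])


rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2))
eps = estimate_epsilon(X)
labels = DBSCAN(eps=eps, min_samples=4).fit_predict(X)  # -1 marks outliers
```

Points labeled -1 by `fit_predict` fall in no cluster and would be treated as outliers.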