robcaulk | f95602f6bd | persist a single training environment. | 2022-08-24 13:00:55 +02:00
robcaulk | 5d4e5e69fe | reinforce training with state info, reinforce prediction with state info, restructure config to accommodate all parameters from any user imported model type. Set 5Act to default env on TDQN. Clean example config. | 2022-08-24 13:00:55 +02:00
sonnhfit | 7962a1439b | remove keep low profit | 2022-08-24 13:00:55 +02:00
sonnhfit | 81b5aa66e8 | make env keep current position when low profit | 2022-08-24 13:00:55 +02:00
sonnhfit | 45218faeb0 | fix coding convention | 2022-08-24 13:00:55 +02:00
robcaulk | b90da46b1b | improve price df handling to enable backtesting | 2022-08-24 13:00:55 +02:00
MukavaValkku | 2080ff86ed | 5ac base fixes in logic | 2022-08-24 13:00:55 +02:00
robcaulk | 16cec7dfbd | fix save/reload functionality for stablebaselines | 2022-08-24 13:00:55 +02:00
sonnhfit | 0475b7cb18 | remove unuse code and fix coding conventions | 2022-08-24 13:00:55 +02:00
MukavaValkku | d60a166fbf | multiproc TDQN with xtra callbacks | 2022-08-24 13:00:55 +02:00
robcaulk | dd382dd370 | add monitor to eval env so that multiproc can save best_model | 2022-08-24 13:00:55 +02:00
robcaulk | 69d542d3e2 | match config and strats to upstream freqai | 2022-08-24 13:00:55 +02:00
robcaulk | e5df39e891 | ensuring best_model is placed in ram and saved to disk and loaded from disk | 2022-08-24 13:00:55 +02:00
robcaulk | bf7ceba958 | set cpu threads in config | 2022-08-24 13:00:55 +02:00
MukavaValkku | 57c488a6f1 | learning_rate + multicpu changes | 2022-08-24 13:00:55 +02:00
robcaulk | acf3484e88 | add multiprocessing variant of ReinforcementLearningPPO | 2022-08-24 13:00:55 +02:00
MukavaValkku | cf0731095f | type fix | 2022-08-24 13:00:55 +02:00
MukavaValkku | 1c81ec6016 | 3ac and 5ac example strategies | 2022-08-24 13:00:55 +02:00
MukavaValkku | 13cd18dc9a | PPO policy change + verbose=1 | 2022-08-24 13:00:55 +02:00
robcaulk | 926023935f | make base 3ac and base 5ac environments. TDQN defaults to 3AC. | 2022-08-24 13:00:55 +02:00
MukavaValkku | 096533bcb9 | 3ac to 5ac | 2022-08-24 13:00:55 +02:00
MukavaValkku | 718c9d0440 | action fix | 2022-08-24 13:00:55 +02:00
robcaulk | 9c78e6c26f | base PPO model only customizes reward for 3AC | 2022-08-24 13:00:55 +02:00
robcaulk | 6048f60f13 | get TDQN working with 5 action environment | 2022-08-24 13:00:55 +02:00
robcaulk | d4db5c3281 | ensure TDQN class is properly named | 2022-08-24 13:00:55 +02:00
robcaulk | 91683e1dca | restructure RL so that user can customize environment | 2022-08-24 13:00:55 +02:00
sonnhfit | ecd1f55abc | add rl module | 2022-08-24 13:00:55 +02:00
MukavaValkku | 9b895500b3 | initial commit - new dev branch | 2022-08-24 13:00:55 +02:00
MukavaValkku | cd3fe44424 | callback function and TDQN model added | 2022-08-24 13:00:55 +02:00
MukavaValkku | 01232e9a1f | callback function and TDQN model added | 2022-08-24 13:00:55 +02:00
MukavaValkku | 8eeaab2746 | add reward function | 2022-08-24 13:00:55 +02:00
MukavaValkku | ec813434f5 | ReinforcementLearningModel | 2022-08-24 13:00:55 +02:00
MukavaValkku | 2f4d73eb06 | Revert "ReinforcementLearningModel" (this reverts commit 4d8dfe1ff1daa47276eda77118ddf39c13512a85) | 2022-08-24 13:00:55 +02:00
MukavaValkku | c1e7db3130 | ReinforcementLearningModel | 2022-08-24 13:00:55 +02:00
robcaulk | 05ed1b544f | Working base for reinforcement learning model | 2022-08-24 13:00:40 +02:00
Matthias | a6d78a8615 | initialize Since parameter properly (closes #7285) | 2022-08-23 06:43:04 +02:00
Matthias | fe7108ae75 | Convert amount to contracts before comparing for close (closes #7279) | 2022-08-23 06:37:38 +02:00
Matthias | 78b161e14c | add contract_size to database | 2022-08-23 06:37:38 +02:00
Matthias | 6036018f35 | Extract contracts_to_amount and amount_to_contracts to standalone functions | 2022-08-23 06:37:38 +02:00
Matthias | 5f38a574ce | Add okx broker id | 2022-08-23 06:37:38 +02:00
th0rntwig | 5ce1c69803 | Improve DBSCAN epsilon identification (#7269) | 2022-08-22 19:57:20 +02:00
robcaulk | 96d8882f1e | Plug mem leak, add training timer | 2022-08-22 13:30:30 +02:00
Matthias | f55d5ffd8c | Don't fail when --strategy-path is not a valid directory. (closes #7264) | 2022-08-22 09:20:14 +00:00
Matthias | 015be770c3 | ccxt now defaults to base volume for all markets | 2022-08-22 06:42:14 +02:00
Matthias | f6d832c6d9 | Add get_option to expose ft_has via method | 2022-08-21 17:51:46 +02:00
Matthias | 87a3115073 | Add get_open_trade_count() to simplify getting open trade count. | 2022-08-21 17:08:27 +02:00
Matthias | cdd4745693 | Merge pull request #7263 from freqtrade/okx_cache_tiers (Okx cache tiers) | 2022-08-20 15:18:13 +02:00
Matthias | 5b3f031590 | Use hyperopt safe amount precision method | 2022-08-20 14:13:15 +02:00
Matthias | 738e95b875 | Add tests for leverage tiers caching | 2022-08-20 13:54:54 +02:00
Matthias | b6e8b9df35 | Use cached leverage tiers | 2022-08-20 13:01:58 +02:00