Richard Jozsa | 2493e0c8a5 | Remove unnecessary lines in Base4, and change the box space to better fit our needs (#7324) | 2022-08-31 16:37:02 +02:00
Richard Jozsa | 1a8e1362a1 | Fix an error in the docs around continual learning and thread count (#7314) | 2022-08-29 11:15:06 +02:00
  * Error in the docs
robcaulk | 67cddae756 | fix tensorboard image | 2022-08-28 21:00:26 +02:00
robcaulk | af8f308584 | start the reinforcement learning doc | 2022-08-28 20:52:03 +02:00
robcaulk | 7766350c15 | refactor environment inheritance tree to accommodate flexible action types/counts. fix bug in train profit handling | 2022-08-28 19:21:57 +02:00
robcaulk | 8c313b431d | remove whitespace from Dockerfile | 2022-08-26 11:14:01 +02:00
robcaulk | baa4f8e3d0 | remove Base3ActionEnv in favor of Base4Action | 2022-08-26 11:04:25 +02:00
richardjozsa | cdc550da9a | Revert the Docker changes to be in line with the original freqtrade image | 2022-08-26 11:04:25 +02:00
  Reverted the changes and added a new approach: with Dockerfile.freqai, users can build their own Docker image.
richardjozsa | d31926efdf | Added Base4Action | 2022-08-26 11:04:25 +02:00
robcaulk | 3199eb453b | reduce code for base use-case, ensure multiproc inherits custom env, add ability to limit RAM use | 2022-08-25 19:05:51 +02:00
robcaulk | 05ccebf9a1 | automate eval freq in multiproc | 2022-08-25 12:29:48 +02:00
robcaulk | 94cfc8e63f | fix multiproc callback, add continual learning to multiproc, fix totalprofit bug in env, set eval_freq automatically, improve default reward | 2022-08-25 11:46:18 +02:00
robcaulk | d1bee29b1e | improve default reward, fix bugs in environment | 2022-08-24 18:32:40 +02:00
robcaulk | a61821e1c6 | remove monitor log | 2022-08-24 16:33:13 +02:00
robcaulk | bd870e2331 | fix monitor bug, set default values in case user doesn't set params | 2022-08-24 16:32:14 +02:00
robcaulk | c0cee5df07 | add continual retraining feature, handle mypy typing reqs, improve docstrings | 2022-08-24 13:00:55 +02:00
robcaulk | b708134c1a | switch multiproc thread count to rl_config definition | 2022-08-24 13:00:55 +02:00
robcaulk | b26ed7dea4 | fix generic reward, add time duration to reward | 2022-08-24 13:00:55 +02:00
robcaulk | 280a1dc3f8 | add live rate, add trade duration | 2022-08-24 13:00:55 +02:00
robcaulk | f9a49744e6 | add strategy to the freqai object | 2022-08-24 13:00:55 +02:00
richardjozsa | a2a4bc05db | Fix the state profit calculation logic | 2022-08-24 13:00:55 +02:00
robcaulk | 29f0e01c4a | expose environment reward parameters to the user config | 2022-08-24 13:00:55 +02:00
robcaulk | d88a0dbf82 | add sb3_contrib models to the available agents. include sb3_contrib in requirements. | 2022-08-24 13:00:55 +02:00
robcaulk | 8b3a8234ac | fix env bug, allow example strat to short | 2022-08-24 13:00:55 +02:00
mrzdev | 8cd4daad0a | Feat/freqai rl dev (#7) | 2022-08-24 13:00:55 +02:00
  * access trades through get_trades_proxy method to allow backtesting
robcaulk | 3eb897c2f8 | reuse callback, allow user to access all stable_baselines3 agents via config | 2022-08-24 13:00:55 +02:00
robcaulk | 4b9499e321 | improve nomenclature and fix short exit bug | 2022-08-24 13:00:55 +02:00
sonnhfit | 4baa36bdcf | fix persisting a single training environment for PPO | 2022-08-24 13:00:55 +02:00
robcaulk | f95602f6bd | persist a single training environment | 2022-08-24 13:00:55 +02:00
robcaulk | 5d4e5e69fe | reinforce training with state info, reinforce prediction with state info, restructure config to accommodate all parameters from any user-imported model type. Set 5Act as the default env for TDQN. Clean example config. | 2022-08-24 13:00:55 +02:00
sonnhfit | 7962a1439b | remove the keep-position-on-low-profit logic | 2022-08-24 13:00:55 +02:00
sonnhfit | 81b5aa66e8 | make env keep the current position when profit is low | 2022-08-24 13:00:55 +02:00
sonnhfit | 45218faeb0 | fix coding convention | 2022-08-24 13:00:55 +02:00
richardjozsa | d55092ff17 | Docker building update, and TDQN repair for the newer release of SB3 | 2022-08-24 13:00:55 +02:00
robcaulk | 74e4fd0633 | ensure config example can work with backtesting RL | 2022-08-24 13:00:55 +02:00
robcaulk | b90da46b1b | improve price df handling to enable backtesting | 2022-08-24 13:00:55 +02:00
MukavaValkku | 2080ff86ed | fix logic in the 5ac base env | 2022-08-24 13:00:55 +02:00
robcaulk | 16cec7dfbd | fix save/reload functionality for stablebaselines | 2022-08-24 13:00:55 +02:00
sonnhfit | 0475b7cb18 | remove unused code and fix coding conventions | 2022-08-24 13:00:55 +02:00
MukavaValkku | d60a166fbf | multiproc TDQN with extra callbacks | 2022-08-24 13:00:55 +02:00
robcaulk | dd382dd370 | add monitor to eval env so that multiproc can save best_model | 2022-08-24 13:00:55 +02:00
robcaulk | 69d542d3e2 | match config and strats to upstream freqai | 2022-08-24 13:00:55 +02:00
robcaulk | e5df39e891 | ensure best_model is placed in RAM, saved to disk, and loaded from disk | 2022-08-24 13:00:55 +02:00
robcaulk | bf7ceba958 | set cpu threads in config | 2022-08-24 13:00:55 +02:00
MukavaValkku | 57c488a6f1 | learning_rate + multicpu changes | 2022-08-24 13:00:55 +02:00
MukavaValkku | 48bb51b458 | example config added | 2022-08-24 13:00:55 +02:00
MukavaValkku | b1fc5a06ca | example config added | 2022-08-24 13:00:55 +02:00
sonnhfit | 6d8e838a8f | update tensorboard dependency | 2022-08-24 13:00:55 +02:00
robcaulk | acf3484e88 | add multiprocessing variant of ReinforcementLearningPPO | 2022-08-24 13:00:55 +02:00
MukavaValkku | cf0731095f | type fix | 2022-08-24 13:00:55 +02:00