Commit Graph

17530 Commits

Author SHA1 Message Date
robcaulk 3199eb453b reduce code for base use-case, ensure multiproc inherits custom env, add ability to limit RAM use. 2022-08-25 19:05:51 +02:00
robcaulk 05ccebf9a1 automate eval freq in multiproc 2022-08-25 12:29:48 +02:00
robcaulk 94cfc8e63f fix multiproc callback, add continual learning to multiproc, fix totalprofit bug in env, set eval_freq automatically, improve default reward 2022-08-25 11:46:18 +02:00
robcaulk d1bee29b1e improve default reward, fix bugs in environment 2022-08-24 18:32:40 +02:00
robcaulk a61821e1c6 remove monitor log 2022-08-24 16:33:13 +02:00
robcaulk bd870e2331 fix monitor bug, set default values in case user doesn't set params 2022-08-24 16:32:14 +02:00
robcaulk c0cee5df07 add continual retraining feature, handle mypy typing reqs, improve docstrings 2022-08-24 13:00:55 +02:00
robcaulk b708134c1a switch multiproc thread count to rl_config definition 2022-08-24 13:00:55 +02:00
robcaulk b26ed7dea4 fix generic reward, add time duration to reward 2022-08-24 13:00:55 +02:00
robcaulk 280a1dc3f8 add live rate, add trade duration 2022-08-24 13:00:55 +02:00
robcaulk f9a49744e6 add strategy to the freqai object 2022-08-24 13:00:55 +02:00
richardjozsa a2a4bc05db Fix the state profit calculation logic 2022-08-24 13:00:55 +02:00
robcaulk 29f0e01c4a expose environment reward parameters to the user config 2022-08-24 13:00:55 +02:00
robcaulk d88a0dbf82 add sb3_contrib models to the available agents. include sb3_contrib in requirements. 2022-08-24 13:00:55 +02:00
robcaulk 8b3a8234ac fix env bug, allow example strat to short 2022-08-24 13:00:55 +02:00
mrzdev 8cd4daad0a Feat/freqai rl dev (#7)
* access trades through get_trades_proxy method to allow backtesting
2022-08-24 13:00:55 +02:00
robcaulk 3eb897c2f8 reuse callback, allow user to access all stable_baselines3 agents via config 2022-08-24 13:00:55 +02:00
robcaulk 4b9499e321 improve nomenclature and fix short exit bug 2022-08-24 13:00:55 +02:00
sonnhfit 4baa36bdcf fix: persist a single training environment for PPO 2022-08-24 13:00:55 +02:00
robcaulk f95602f6bd persist a single training environment. 2022-08-24 13:00:55 +02:00
robcaulk 5d4e5e69fe reinforce training with state info, reinforce prediction with state info, restructure config to accommodate all parameters from any user imported model type. Set 5Act to default env on TDQN. Clean example config. 2022-08-24 13:00:55 +02:00
sonnhfit 7962a1439b remove keep low profit 2022-08-24 13:00:55 +02:00
sonnhfit 81b5aa66e8 make env keep current position when low profit 2022-08-24 13:00:55 +02:00
sonnhfit 45218faeb0 fix coding convention 2022-08-24 13:00:55 +02:00
richardjozsa d55092ff17 Docker building update, and TDQN repair with the newer release of SB3 2022-08-24 13:00:55 +02:00
robcaulk 74e4fd0633 ensure config example can work with backtesting RL 2022-08-24 13:00:55 +02:00
robcaulk b90da46b1b improve price df handling to enable backtesting 2022-08-24 13:00:55 +02:00
MukavaValkku 2080ff86ed 5ac base fixes in logic 2022-08-24 13:00:55 +02:00
robcaulk 16cec7dfbd fix save/reload functionality for stablebaselines 2022-08-24 13:00:55 +02:00
sonnhfit 0475b7cb18 remove unused code and fix coding conventions 2022-08-24 13:00:55 +02:00
MukavaValkku d60a166fbf multiproc TDQN with xtra callbacks 2022-08-24 13:00:55 +02:00
robcaulk dd382dd370 add monitor to eval env so that multiproc can save best_model 2022-08-24 13:00:55 +02:00
robcaulk 69d542d3e2 match config and strats to upstream freqai 2022-08-24 13:00:55 +02:00
robcaulk e5df39e891 ensure best_model is kept in RAM, saved to disk, and loaded from disk 2022-08-24 13:00:55 +02:00
robcaulk bf7ceba958 set cpu threads in config 2022-08-24 13:00:55 +02:00
MukavaValkku 57c488a6f1 learning_rate + multicpu changes 2022-08-24 13:00:55 +02:00
MukavaValkku 48bb51b458 example config added 2022-08-24 13:00:55 +02:00
MukavaValkku b1fc5a06ca example config added 2022-08-24 13:00:55 +02:00
sonnhfit 6d8e838a8f update tensorboard dependency 2022-08-24 13:00:55 +02:00
robcaulk acf3484e88 add multiprocessing variant of ReinforcementLearningPPO 2022-08-24 13:00:55 +02:00
MukavaValkku cf0731095f type fix 2022-08-24 13:00:55 +02:00
MukavaValkku 1c81ec6016 3ac and 5ac example strategies 2022-08-24 13:00:55 +02:00
MukavaValkku 13cd18dc9a PPO policy change + verbose=1 2022-08-24 13:00:55 +02:00
robcaulk 926023935f make base 3ac and base 5ac environments. TDQN defaults to 3AC. 2022-08-24 13:00:55 +02:00
MukavaValkku 096533bcb9 3ac to 5ac 2022-08-24 13:00:55 +02:00
MukavaValkku 718c9d0440 action fix 2022-08-24 13:00:55 +02:00
robcaulk 9c78e6c26f base PPO model only customizes reward for 3AC 2022-08-24 13:00:55 +02:00
robcaulk 6048f60f13 get TDQN working with 5 action environment 2022-08-24 13:00:55 +02:00
robcaulk d4db5c3281 ensure TDQN class is properly named 2022-08-24 13:00:55 +02:00
robcaulk 91683e1dca restructure RL so that user can customize environment 2022-08-24 13:00:55 +02:00
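Commits 29f0e01c4a ("expose environment reward parameters to the user config") and b708134c1a ("switch multiproc thread count to rl_config definition") imply a user-facing rl_config block. A hypothetical fragment of what such a config might look like; the field names and values here are illustrative and may differ from the final upstream FreqAI schema:

```json
{
  "freqai": {
    "rl_config": {
      "cpu_count": 4,
      "model_reward_parameters": {
        "rr": 1,
        "profit_aim": 0.025
      }
    }
  }
}
```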
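Several commits above (05ccebf9a1, 94cfc8e63f) mention setting eval_freq automatically for multiprocessed training. A minimal sketch of one way to derive it, assuming Stable-Baselines3's convention that EvalCallback counts eval_freq in per-environment steps, so with n_envs parallel workers an evaluation fires every eval_freq * n_envs total steps. The helper name auto_eval_freq and the evals_per_cycle parameter are illustrative, not taken from the repository:

```python
def auto_eval_freq(total_timesteps: int, n_envs: int, evals_per_cycle: int = 10) -> int:
    """Derive an EvalCallback eval_freq from the training budget.

    Spaces `evals_per_cycle` evaluations evenly across one training run,
    accounting for the fact that eval_freq is measured in per-environment
    steps when training on n_envs parallel workers.
    """
    per_env_steps = max(total_timesteps // max(n_envs, 1), 1)
    return max(per_env_steps // evals_per_cycle, 1)


# 100k total steps across 4 workers, 10 evaluations per run:
print(auto_eval_freq(100_000, 4))  # 2500 per-env steps between evaluations
```

Clamping to a minimum of 1 avoids a zero eval_freq (which would disable or break evaluation) when the training budget is small relative to the worker count.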