Author | Commit | Message | Date
robcaulk | b26ed7dea4 | fix generic reward, add time duration to reward | 2022-08-24 13:00:55 +02:00
robcaulk | 29f0e01c4a | expose environment reward parameters to the user config | 2022-08-24 13:00:55 +02:00
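The commit above moves reward shaping out of the environment code and into the user config. A minimal sketch of that idea follows; the `rl_config` / `model_reward_parameters` keys and the factor names are assumptions for illustration, not the project's actual schema.

```python
# Hypothetical config layout for reward shaping; key names are illustrative only.
config = {
    "rl_config": {
        "model_reward_parameters": {
            "win_reward_factor": 2.0,
            "loss_penalty_factor": 1.0,
        }
    }
}

# The environment reads its reward parameters at construction time instead of
# hard-coding them.
reward_params = config["rl_config"]["model_reward_parameters"]
win_factor = reward_params.get("win_reward_factor", 1.0)
loss_factor = reward_params.get("loss_penalty_factor", 1.0)
```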
robcaulk | 3eb897c2f8 | reuse callback, allow user to access all stable_baselines3 agents via config | 2022-08-24 13:00:55 +02:00
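Letting the config name any stable_baselines3 agent usually comes down to resolving an agent class from a string. A hedged sketch of one way to do that; the `model_type` mapping and the `MlpPolicy` choice are assumptions, not the project's actual mechanism.

```python
# Sketch: resolve a stable_baselines3 agent class from a config string.
from stable_baselines3 import A2C, DQN, PPO

AGENTS = {"PPO": PPO, "DQN": DQN, "A2C": A2C}

def build_agent(model_type: str, env):
    """Instantiate whichever SB3 agent the user named in their config."""
    agent_cls = AGENTS[model_type]
    return agent_cls("MlpPolicy", env, verbose=1)
```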
sonnhfit | 4baa36bdcf | fix persist a single training environment for PPO | 2022-08-24 13:00:55 +02:00
robcaulk | f95602f6bd | persist a single training environment. | 2022-08-24 13:00:55 +02:00
robcaulk | 5d4e5e69fe | reinforce training with state info, reinforce prediction with state info, restructure config to accommodate all parameters from any user imported model type. Set 5Act to default env on TDQN. Clean example config. | 2022-08-24 13:00:55 +02:00
robcaulk | b90da46b1b | improve price df handling to enable backtesting | 2022-08-24 13:00:55 +02:00
sonnhfit | 0475b7cb18 | remove unused code and fix coding conventions | 2022-08-24 13:00:55 +02:00
MukavaValkku | d60a166fbf | multiproc TDQN with xtra callbacks | 2022-08-24 13:00:55 +02:00
robcaulk | dd382dd370 | add monitor to eval env so that multiproc can save best_model | 2022-08-24 13:00:55 +02:00
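Wrapping the evaluation environment in stable_baselines3's Monitor is what lets EvalCallback track episode rewards and write out best_model. An illustrative sketch; CartPole stands in for the trading environment and the save path is arbitrary.

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import EvalCallback
from stable_baselines3.common.monitor import Monitor

# CartPole is only a placeholder for the trading environment in this sketch.
train_env = gym.make("CartPole-v1")
eval_env = Monitor(gym.make("CartPole-v1"))  # Monitor records episode stats for evaluation

eval_callback = EvalCallback(
    eval_env,
    best_model_save_path="./best_model/",  # EvalCallback writes best_model.zip here
    eval_freq=10_000,
    deterministic=True,
)

model = PPO("MlpPolicy", train_env, verbose=1)
model.learn(total_timesteps=50_000, callback=eval_callback)
```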
robcaulk | e5df39e891 | ensuring best_model is placed in ram and saved to disk and loaded from disk | 2022-08-24 13:00:55 +02:00
robcaulk | bf7ceba958 | set cpu threads in config | 2022-08-24 13:00:55 +02:00
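One plausible reading of "set cpu threads in config" is capping the PyTorch thread pool from a user setting. A small sketch under that assumption; the `thread_count` key is hypothetical.

```python
import torch

# Hypothetical user config exposing a CPU thread count for training.
config = {"rl_config": {"thread_count": 4}}

# Limit PyTorch's intra-op thread pool to the configured value.
torch.set_num_threads(config["rl_config"]["thread_count"])
```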
MukavaValkku | 57c488a6f1 | learning_rate + multicpu changes | 2022-08-24 13:00:55 +02:00
robcaulk | acf3484e88 | add multiprocessing variant of ReinforcementLearningPPO | 2022-08-24 13:00:55 +02:00
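A multiprocessing PPO variant would typically run one environment per worker process via SubprocVecEnv. An illustrative sketch, again with CartPole as a placeholder environment rather than the project's own trading env.

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import SubprocVecEnv

def make_env(rank: int):
    """Return a thunk, since SubprocVecEnv expects one callable per worker."""
    def _init():
        env = gym.make("CartPole-v1")  # placeholder for the trading env
        env.reset(seed=rank)
        return env
    return _init

if __name__ == "__main__":
    vec_env = SubprocVecEnv([make_env(i) for i in range(4)])  # 4 worker processes
    model = PPO("MlpPolicy", vec_env, verbose=1)
    model.learn(total_timesteps=50_000)
```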
MukavaValkku | 13cd18dc9a | PPO policy change + verbose=1 | 2022-08-24 13:00:55 +02:00
robcaulk | 926023935f | make base 3ac and base 5ac environments. TDQN defaults to 3AC. | 2022-08-24 13:00:55 +02:00
MukavaValkku | 096533bcb9 | 3ac to 5ac | 2022-08-24 13:00:55 +02:00
MukavaValkku | 718c9d0440 | action fix | 2022-08-24 13:00:55 +02:00
robcaulk | 9c78e6c26f | base PPO model only customizes reward for 3AC | 2022-08-24 13:00:55 +02:00
robcaulk | 6048f60f13 | get TDQN working with 5 action environment | 2022-08-24 13:00:55 +02:00
robcaulk | d4db5c3281 | ensure TDQN class is properly named | 2022-08-24 13:00:55 +02:00
robcaulk | 91683e1dca | restructure RL so that user can customize environment | 2022-08-24 13:00:55 +02:00
sonnhfit | ecd1f55abc | add rl module | 2022-08-24 13:00:55 +02:00
MukavaValkku | 9b895500b3 | initial commit - new dev branch | 2022-08-24 13:00:55 +02:00
MukavaValkku | cd3fe44424 | callback function and TDQN model added | 2022-08-24 13:00:55 +02:00
MukavaValkku | 01232e9a1f | callback function and TDQN model added | 2022-08-24 13:00:55 +02:00
MukavaValkku | 8eeaab2746 | add reward function | 2022-08-24 13:00:55 +02:00
MukavaValkku | ec813434f5 | ReinforcementLearningModel | 2022-08-24 13:00:55 +02:00
MukavaValkku | 2f4d73eb06 | Revert "ReinforcementLearningModel" (reverts commit 4d8dfe1ff1daa47276eda77118ddf39c13512a85) | 2022-08-24 13:00:55 +02:00
MukavaValkku | c1e7db3130 | ReinforcementLearningModel | 2022-08-24 13:00:55 +02:00
robcaulk | 05ed1b544f | Working base for reinforcement learning model | 2022-08-24 13:00:40 +02:00
robcaulk | 4c0fda400f | fix input shape warning for LGBMClassifier, add sample_weights/eval_weights | 2022-08-16 11:41:53 +02:00
Robert Caulk | c9c128f781 | finalize logo, improve doc, improve algo overview, fix base tensorflowmodel for mypy | 2022-08-14 02:49:01 +02:00
robcaulk | 58de20af0f | make BaseClassifierModel. Add predict_proba to lightgbm | 2022-08-13 20:07:31 +02:00
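Adding predict_proba to the LightGBM classifier presumably means exposing class probabilities through LightGBM's scikit-learn style API. A small self-contained sketch on synthetic data; the model parameters are arbitrary.

```python
import numpy as np
from lightgbm import LGBMClassifier

# Synthetic two-class data for illustration only.
X = np.random.rand(200, 5)
y = np.random.randint(0, 2, size=200)

clf = LGBMClassifier(n_estimators=50)
clf.fit(X, y)

proba = clf.predict_proba(X[:5])   # shape (5, n_classes), one probability column per class
labels = clf.predict(X[:5])        # hard labels derived from those probabilities
```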
robcaulk | b1b76a2dbe | debug classifier with predict proba | 2022-08-13 19:40:24 +02:00
robcaulk | 23cc21ce59 | add predict_proba to base classifier, improve historic predictions handling | 2022-08-13 19:40:24 +02:00
robcaulk | eb8bde37c1 | Add lightgbm classifier, add classifier check test, fix classifier bug. | 2022-08-06 17:51:21 +02:00
Robert Caulk | 07763d0d4f | add classifier, improve model naming scheme | 2022-08-06 08:33:55 +02:00
robcaulk | f22b140782 | fix backtesting bug, undo move of label stat calc, fix example strat exit logic | 2022-07-29 17:27:35 +02:00
robcaulk | 59624181bd | isort BaseRegressionModel imports | 2022-07-29 08:23:44 +02:00
robcaulk | c84d54b35e | Fix typing issue, avoid using .get() when unnecessary, convert to fstrings | 2022-07-29 08:12:50 +02:00
robcaulk | 324e54c015 | fix possible memory leak associated with Catboost Pool object | 2022-07-26 17:29:29 +02:00
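The commit above does not say how the CatBoost Pool leak was actually fixed; one plausible mitigation is simply not keeping Pool objects alive after training, sketched below on synthetic data.

```python
import gc
import numpy as np
from catboost import CatBoostRegressor, Pool

# Synthetic data standing in for the real training features/labels.
X = np.random.rand(500, 8)
y = np.random.rand(500)

train_pool = Pool(data=X[:400], label=y[:400])
eval_pool = Pool(data=X[400:], label=y[400:])

model = CatBoostRegressor(iterations=100, verbose=False)
model.fit(train_pool, eval_set=eval_pool)

# Drop references to the Pool objects once training is done so they can be collected.
del train_pool, eval_pool
gc.collect()
```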
robcaulk | 3f149c4067 | fix return type in BaseTensorFlowModel | 2022-07-26 16:01:54 +02:00
robcaulk | e213d0ad55 | isolate data_drawer functions from data_kitchen, accommodate tests, add new test | 2022-07-26 10:24:14 +02:00
robcaulk | 56b17e6f3c | allow user to pass test_size = 0 and avoid using eval sets in prediction models | 2022-07-25 19:40:13 +02:00
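Allowing test_size = 0 implies fitting on the full dataset and skipping the eval set entirely. A hedged sketch of that branch, with LightGBM and a synthetic split as stand-ins for the project's own prediction models.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split

# Synthetic data; test_size would come from the user config.
X = np.random.rand(300, 6)
y = np.random.rand(300)
test_size = 0.0  # 0 means "train on everything, no eval set"

model = LGBMRegressor(n_estimators=100)
if test_size > 0:
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size)
    model.fit(X_train, y_train, eval_set=[(X_test, y_test)])
else:
    model.fit(X, y)  # no hold-out set when the user opts out
```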
Robert Caulk | 7b105532d1 | fix mypy error and add test for principal component analysis | 2022-07-25 11:46:59 +02:00
Matthias | 520ee3f7a1 | Convert freqAI into packages | 2022-07-24 17:07:45 +02:00
Matthias | 1885deb632 | More docstring changes | 2022-07-24 16:54:39 +02:00
Robert Caulk | 88e10f7306 | add exception for not passing timerange. Remove hard coded arguments for CatboostPredictionModels. Update docs | 2022-07-24 09:01:23 +02:00
robcaulk | f3d46613ee | move prediction denormalization into datakitchen. remove duplicate associated code. avoid normalization/denormalization for string dtypes. | 2022-07-23 17:14:33 +02:00