robcaulk | af9e400562 | add test coverage, fix bug in base environment. Ensure proper fee is used. | 2022-11-13 15:31:37 +01:00
robcaulk | 81f800a79b | switch to using FT calc_profi_pct, reverse entry/exit fees | 2022-11-13 13:41:17 +01:00
robcaulk | e71a8b8ac1 | add ability to integrate state info or not, and prevent state info integration during backtesting | 2022-11-12 18:46:48 +01:00
robcaulk | 9c6b97c678 | ensure normalization acceleration methods are employed in RL | 2022-11-12 12:01:59 +01:00
robcaulk | 6746868ea7 | store dataprovider to self instead of strategy | 2022-11-12 11:33:03 +01:00
robcaulk | 8d7adfabe9 | clean RL tests to avoid dir pollution and increase speed | 2022-10-08 12:10:38 +02:00
robcaulk | 488739424d | fix reward inconsistency in template | 2022-10-05 20:55:50 +02:00
robcaulk | cf882fa84e | fix tests | 2022-10-01 20:26:41 +02:00
Robert Caulk | 555cc42630 | Ensure 1 thread is available (for testing purposes) | 2022-09-29 14:00:09 +02:00
Robert Caulk | dcf6ebe273 | Update BaseReinforcementLearningModel.py | 2022-09-29 00:37:03 +02:00
robcaulk | 83343dc2f1 | control number of threads, update doc | 2022-09-29 00:10:18 +02:00
robcaulk | 647200e8a7 | isort | 2022-09-23 19:30:56 +02:00
robcaulk | 77c360b264 | improve typing, improve docstrings, ensure global tests pass | 2022-09-23 19:17:27 +02:00
Robert Caulk | f5cd8f62c6 | Remove unused code from BaseEnv | 2022-09-23 10:24:39 +02:00
robcaulk | 7295ba0fb2 | add test for Base4ActionEnv | 2022-09-22 23:42:33 +02:00
robcaulk | eeebb78a5c | skip darwin in RL tests, remove example scripts, improve doc | 2022-09-22 21:16:21 +02:00
robcaulk | 7b1d409c98 | fix mypy/flake8 | 2022-09-17 17:51:06 +02:00
robcaulk | 3b97b3d5c8 | fix mypy error for strategy | 2022-09-15 00:56:51 +02:00
robcaulk | 8aac644009 | add tests. add guardrails. | 2022-09-15 00:46:35 +02:00
robcaulk | 48140bff91 | fix bug in 4ActRLEnv | 2022-09-14 22:53:53 +02:00
robcaulk | 27dce20b29 | fix bug in Base4ActionRLEnv, improve example strats | 2022-09-04 11:21:54 +02:00
Richard Jozsa | 2493e0c8a5 | Unnecessary lines in Base4, and changes for box space, to fit better for our needs (#7324) | 2022-08-31 16:37:02 +02:00
robcaulk | 7766350c15 | refactor environment inheritance tree to accommodate flexible action types/counts. fix bug in train profit handling | 2022-08-28 19:21:57 +02:00
robcaulk | baa4f8e3d0 | remove Base3ActionEnv in favor of Base4Action | 2022-08-26 11:04:25 +02:00
richardjozsa | d31926efdf | Added Base4Action | 2022-08-26 11:04:25 +02:00
robcaulk | 3199eb453b | reduce code for base use-case, ensure multiproc inherits custom env, add ability to limit ram use. | 2022-08-25 19:05:51 +02:00
robcaulk | 94cfc8e63f | fix multiproc callback, add continual learning to multiproc, fix totalprofit bug in env, set eval_freq automatically, improve default reward | 2022-08-25 11:46:18 +02:00
robcaulk | d1bee29b1e | improve default reward, fix bugs in environment | 2022-08-24 18:32:40 +02:00
robcaulk | a61821e1c6 | remove monitor log | 2022-08-24 16:33:13 +02:00
robcaulk | bd870e2331 | fix monitor bug, set default values in case user doesn't set params | 2022-08-24 16:32:14 +02:00
robcaulk | c0cee5df07 | add continual retraining feature, handle mypy typing reqs, improve docstrings | 2022-08-24 13:00:55 +02:00
robcaulk | b26ed7dea4 | fix generic reward, add time duration to reward | 2022-08-24 13:00:55 +02:00
robcaulk | 280a1dc3f8 | add live rate, add trade duration | 2022-08-24 13:00:55 +02:00
robcaulk | f9a49744e6 | add strategy to the freqai object | 2022-08-24 13:00:55 +02:00
richardjozsa | a2a4bc05db | Fix the state profit calculation logic | 2022-08-24 13:00:55 +02:00
robcaulk | 29f0e01c4a | expose environment reward parameters to the user config | 2022-08-24 13:00:55 +02:00
robcaulk | d88a0dbf82 | add sb3_contrib models to the available agents. include sb3_contrib in requirements. | 2022-08-24 13:00:55 +02:00
robcaulk | 8b3a8234ac | fix env bug, allow example strat to short | 2022-08-24 13:00:55 +02:00
mrzdev | 8cd4daad0a | Feat/freqai rl dev (#7) | 2022-08-24 13:00:55 +02:00
  * access trades through get_trades_proxy method to allow backtesting
robcaulk | 3eb897c2f8 | reuse callback, allow user to access all stable_baselines3 agents via config | 2022-08-24 13:00:55 +02:00
robcaulk | 4b9499e321 | improve nomenclature and fix short exit bug | 2022-08-24 13:00:55 +02:00
sonnhfit | 4baa36bdcf | fix persist a single training environment for PPO | 2022-08-24 13:00:55 +02:00
robcaulk | f95602f6bd | persist a single training environment. | 2022-08-24 13:00:55 +02:00
robcaulk | 5d4e5e69fe | reinforce training with state info, reinforce prediction with state info, restructure config to accommodate all parameters from any user imported model type. Set 5Act to default env on TDQN. Clean example config. | 2022-08-24 13:00:55 +02:00
sonnhfit | 7962a1439b | remove keep-low-profit behavior | 2022-08-24 13:00:55 +02:00
sonnhfit | 81b5aa66e8 | make env keep current position when low profit | 2022-08-24 13:00:55 +02:00
sonnhfit | 45218faeb0 | fix coding convention | 2022-08-24 13:00:55 +02:00
robcaulk | b90da46b1b | improve price df handling to enable backtesting | 2022-08-24 13:00:55 +02:00
MukavaValkku | 2080ff86ed | 5ac base fixes in logic | 2022-08-24 13:00:55 +02:00
sonnhfit | 0475b7cb18 | remove unused code and fix coding conventions | 2022-08-24 13:00:55 +02:00
robcaulk | bf7ceba958 | set cpu threads in config | 2022-08-24 13:00:55 +02:00
robcaulk | acf3484e88 | add multiprocessing variant of ReinforcementLearningPPO | 2022-08-24 13:00:55 +02:00
robcaulk | 926023935f | make base 3ac and base 5ac environments. TDQN defaults to 3AC. | 2022-08-24 13:00:55 +02:00
robcaulk | 91683e1dca | restructure RL so that user can customize environment | 2022-08-24 13:00:55 +02:00