robcaulk | 581a5296cc | fix docstrings to reflect new env_info changes | 2022-12-15 16:50:08 +01:00
robcaulk | 7b4abd5ef5 | use a dictionary to make code more readable | 2022-12-15 12:25:33 +01:00
Emre | 2018da0767 | Add env_info dict to base environment | 2022-12-14 22:03:05 +03:00
robcaulk | 2285ca7d2a | add dp to multiproc | 2022-12-14 18:22:20 +01:00
robcaulk | 24766928ba | reorganize/generalize tensorboard callback | 2022-12-04 13:54:30 +01:00
smarmau | b2edc58089 | fix flake8 | 2022-12-03 22:31:02 +11:00
smarmau | 469aa0d43f | add state/action info to callbacks | 2022-12-03 21:16:46 +11:00
Emre | 4a9982f86b | Fix sb3_contrib loading issue | 2022-12-01 10:08:42 +03:00
robcaulk | e7f72d52b8 | bring back market side setting in get_state_info | 2022-11-30 12:36:26 +01:00
Emre | 9cbfa12011 | Directly set model_type in base RL model | 2022-11-28 16:02:17 +03:00
Matthias | 7ebc8ee169 | Fix missing Optional typehint | 2022-11-26 13:32:18 +01:00
Matthias | bdfedb5fcb | Improve typehints / reduce warnings from mypy | 2022-11-26 13:03:07 +01:00
robcaulk | 3a07749fcc | fix docstring | 2022-11-24 18:46:54 +01:00
Matthias | 8f1a8c752b | Add freqairl docker build process | 2022-11-24 07:00:12 +01:00
robcaulk | 60fcd8dce2 | fix skipped mac test, fix RL bug in add_state_info, fix use of __import__, revise doc | 2022-11-17 21:50:02 +01:00
robcaulk | 6394ef4558 | fix docstrings | 2022-11-13 17:43:52 +01:00
robcaulk | af9e400562 | add test coverage, fix bug in base environment. Ensure proper fee is used. | 2022-11-13 15:31:37 +01:00
robcaulk | 81f800a79b | switch to using FT calc_profi_pct, reverse entry/exit fees | 2022-11-13 13:41:17 +01:00
robcaulk | e71a8b8ac1 | add ability to integrate state info or not, and prevent state info integration during backtesting | 2022-11-12 18:46:48 +01:00
robcaulk | 9c6b97c678 | ensure normalization acceleration methods are employed in RL | 2022-11-12 12:01:59 +01:00
robcaulk | 6746868ea7 | store dataprovider to self instead of strategy | 2022-11-12 11:33:03 +01:00
robcaulk | 8d7adfabe9 | clean RL tests to avoid dir pollution and increase speed | 2022-10-08 12:10:38 +02:00
robcaulk | 488739424d | fix reward inconsistency in template | 2022-10-05 20:55:50 +02:00
robcaulk | cf882fa84e | fix tests | 2022-10-01 20:26:41 +02:00
Robert Caulk | 555cc42630 | Ensure 1 thread is available (for testing purposes) | 2022-09-29 14:00:09 +02:00
Robert Caulk | dcf6ebe273 | Update BaseReinforcementLearningModel.py | 2022-09-29 00:37:03 +02:00
robcaulk | 83343dc2f1 | control number of threads, update doc | 2022-09-29 00:10:18 +02:00
robcaulk | 647200e8a7 | isort | 2022-09-23 19:30:56 +02:00
robcaulk | 77c360b264 | improve typing, improve docstrings, ensure global tests pass | 2022-09-23 19:17:27 +02:00
robcaulk | 7295ba0fb2 | add test for Base4ActionEnv | 2022-09-22 23:42:33 +02:00
robcaulk | 7b1d409c98 | fix mypy/flake8 | 2022-09-17 17:51:06 +02:00
robcaulk | 3b97b3d5c8 | fix mypy error for strategy | 2022-09-15 00:56:51 +02:00
robcaulk | 8aac644009 | add tests. add guardrails. | 2022-09-15 00:46:35 +02:00
robcaulk | 7766350c15 | refactor environment inheritance tree to accommodate flexible action types/counts. fix bug in train profit handling | 2022-08-28 19:21:57 +02:00
robcaulk | 3199eb453b | reduce code for base use-case, ensure multiproc inherits custom env, add ability to limit ram use. | 2022-08-25 19:05:51 +02:00
robcaulk | 94cfc8e63f | fix multiproc callback, add continual learning to multiproc, fix totalprofit bug in env, set eval_freq automatically, improve default reward | 2022-08-25 11:46:18 +02:00
robcaulk | d1bee29b1e | improve default reward, fix bugs in environment | 2022-08-24 18:32:40 +02:00
robcaulk | a61821e1c6 | remove monitor log | 2022-08-24 16:33:13 +02:00
robcaulk | bd870e2331 | fix monitor bug, set default values in case user doesn't set params | 2022-08-24 16:32:14 +02:00
robcaulk | c0cee5df07 | add continual retraining feature, handle mypy typing reqs, improve docstrings | 2022-08-24 13:00:55 +02:00
robcaulk | b26ed7dea4 | fix generic reward, add time duration to reward | 2022-08-24 13:00:55 +02:00
robcaulk | 280a1dc3f8 | add live rate, add trade duration | 2022-08-24 13:00:55 +02:00
robcaulk | f9a49744e6 | add strategy to the freqai object | 2022-08-24 13:00:55 +02:00
richardjozsa | a2a4bc05db | Fix the state profit calculation logic | 2022-08-24 13:00:55 +02:00
robcaulk | 29f0e01c4a | expose environment reward parameters to the user config | 2022-08-24 13:00:55 +02:00
robcaulk | d88a0dbf82 | add sb3_contrib models to the available agents. include sb3_contrib in requirements. | 2022-08-24 13:00:55 +02:00
mrzdev | 8cd4daad0a | Feat/freqai rl dev (#7): access trades through get_trades_proxy method to allow backtesting | 2022-08-24 13:00:55 +02:00
robcaulk | 3eb897c2f8 | reuse callback, allow user to access all stable_baselines3 agents via config | 2022-08-24 13:00:55 +02:00
robcaulk | f95602f6bd | persist a single training environment. | 2022-08-24 13:00:55 +02:00
robcaulk | 5d4e5e69fe | reinforce training with state info, reinforce prediction with state info, restructure config to accommodate all parameters from any user imported model type. Set 5Act to default env on TDQN. Clean example config. | 2022-08-24 13:00:55 +02:00