Compare commits


179 Commits

Author SHA1 Message Date
Matthias
0afd5a7385 Improve stoploss documentation
closes #8492
2023-04-12 18:13:16 +02:00
Matthias
2131205db6 Bump tag length to 255 2023-04-12 07:19:36 +02:00
Matthias
b2b19915e6 Limit enter_tag and exit_reason to their actual field lenght
closes #8486
2023-04-12 07:19:36 +02:00
Matthias
bba6f8e133 Use length constant for tests 2023-04-12 07:19:36 +02:00
Matthias
a6d2233b95 Use constant for custom field lengths 2023-04-11 21:05:14 +02:00
Matthias
9857675a5e Update torch import 2023-04-11 19:38:24 +02:00
Robert Caulk
4ab047dfa7
Merge pull request #8297 from Yinon-Polak/feat/add-pytorch-model-support
Feat/add pytorch model support
2023-04-11 15:40:12 +02:00
Matthias
476ed938f5 Extract custom_tag limit from interface file 2023-04-11 07:26:38 +02:00
Matthias
40ffac9de0 Prevent random test failures by freezing time for certain tests 2023-04-10 19:45:24 +02:00
Matthias
b892d373cd Improve timerange parsing when accepting values from API 2023-04-10 19:45:24 +02:00
Matthias
c3647e49ad
Merge pull request #8484 from freqtrade/dependabot/pip/develop/nbconvert-7.3.1
Bump nbconvert from 7.2.10 to 7.3.1
2023-04-10 19:38:12 +02:00
Matthias
37ed37dc76
Merge pull request #8485 from freqtrade/dependabot/pip/develop/mkdocs-material-9.1.6
Bump mkdocs-material from 9.1.5 to 9.1.6
2023-04-10 19:37:54 +02:00
Matthias
5cb688c112
Merge pull request #8482 from freqtrade/dependabot/pip/develop/websockets-11.0.1
Bump websockets from 11.0 to 11.0.1
2023-04-10 19:37:37 +02:00
Matthias
3e394d0612
Merge pull request #8480 from freqtrade/dependabot/pip/develop/sqlalchemy-2.0.9
Bump sqlalchemy from 2.0.8 to 2.0.9
2023-04-10 19:37:17 +02:00
dependabot[bot]
c4c2298686
Bump mkdocs-material from 9.1.5 to 9.1.6
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.1.5 to 9.1.6.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.1.5...9.1.6)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 16:17:10 +00:00
dependabot[bot]
8564dc10b2
Bump nbconvert from 7.2.10 to 7.3.1
Bumps [nbconvert](https://github.com/jupyter/nbconvert) from 7.2.10 to 7.3.1.
- [Release notes](https://github.com/jupyter/nbconvert/releases)
- [Changelog](https://github.com/jupyter/nbconvert/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jupyter/nbconvert/compare/v7.2.10...v7.3.1)

---
updated-dependencies:
- dependency-name: nbconvert
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 16:16:42 +00:00
Matthias
3fb892fcb8
Merge pull request #8483 from freqtrade/dependabot/pip/develop/ruff-0.0.261
Bump ruff from 0.0.260 to 0.0.261
2023-04-10 18:16:24 +02:00
Matthias
9968348324
Merge pull request #8481 from freqtrade/dependabot/pip/develop/ccxt-3.0.59
Bump ccxt from 3.0.58 to 3.0.59
2023-04-10 18:15:44 +02:00
dependabot[bot]
fa293c54f8
Bump websockets from 11.0 to 11.0.1
Bumps [websockets](https://github.com/aaugustin/websockets) from 11.0 to 11.0.1.
- [Release notes](https://github.com/aaugustin/websockets/releases)
- [Commits](https://github.com/aaugustin/websockets/compare/11.0...11.0.1)

---
updated-dependencies:
- dependency-name: websockets
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 15:46:40 +00:00
Matthias
95449ca886
Merge pull request #8478 from freqtrade/dependabot/pip/develop/schedule-1.2.0
Bump schedule from 1.1.0 to 1.2.0
2023-04-10 17:45:44 +02:00
Matthias
70fa4a53cd
pre-commit - bump sqlalchemy 2023-04-10 17:45:23 +02:00
dependabot[bot]
467c63ff01
Bump ruff from 0.0.260 to 0.0.261
Bumps [ruff](https://github.com/charliermarsh/ruff) from 0.0.260 to 0.0.261.
- [Release notes](https://github.com/charliermarsh/ruff/releases)
- [Changelog](https://github.com/charliermarsh/ruff/blob/main/BREAKING_CHANGES.md)
- [Commits](https://github.com/charliermarsh/ruff/compare/v0.0.260...v0.0.261)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 15:25:04 +00:00
Matthias
b8a9c200fe
Merge pull request #8479 from freqtrade/dependabot/pip/develop/pre-commit-3.2.2
Bump pre-commit from 3.2.1 to 3.2.2
2023-04-10 17:24:02 +02:00
Matthias
7c10af65a1
Merge pull request #8477 from freqtrade/dependabot/pip/develop/plotly-5.14.1
Bump plotly from 5.14.0 to 5.14.1
2023-04-10 16:44:35 +02:00
Matthias
e2cd23b1d2 Remove deprecated pandas option 2023-04-10 16:33:56 +02:00
dependabot[bot]
0d408d3d43
Bump ccxt from 3.0.58 to 3.0.59
Bumps [ccxt](https://github.com/ccxt/ccxt) from 3.0.58 to 3.0.59.
- [Release notes](https://github.com/ccxt/ccxt/releases)
- [Changelog](https://github.com/ccxt/ccxt/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ccxt/ccxt/compare/3.0.58...3.0.59)

---
updated-dependencies:
- dependency-name: ccxt
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 14:20:19 +00:00
dependabot[bot]
2309197771
Bump sqlalchemy from 2.0.8 to 2.0.9
Bumps [sqlalchemy](https://github.com/sqlalchemy/sqlalchemy) from 2.0.8 to 2.0.9.
- [Release notes](https://github.com/sqlalchemy/sqlalchemy/releases)
- [Changelog](https://github.com/sqlalchemy/sqlalchemy/blob/main/CHANGES.rst)
- [Commits](https://github.com/sqlalchemy/sqlalchemy/commits)

---
updated-dependencies:
- dependency-name: sqlalchemy
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 14:20:14 +00:00
dependabot[bot]
66fe9abce0
Bump pre-commit from 3.2.1 to 3.2.2
Bumps [pre-commit](https://github.com/pre-commit/pre-commit) from 3.2.1 to 3.2.2.
- [Release notes](https://github.com/pre-commit/pre-commit/releases)
- [Changelog](https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit/compare/v3.2.1...v3.2.2)

---
updated-dependencies:
- dependency-name: pre-commit
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 14:20:03 +00:00
dependabot[bot]
200c18f3e4
Bump schedule from 1.1.0 to 1.2.0
Bumps [schedule](https://github.com/dbader/schedule) from 1.1.0 to 1.2.0.
- [Release notes](https://github.com/dbader/schedule/releases)
- [Changelog](https://github.com/dbader/schedule/blob/master/HISTORY.rst)
- [Commits](https://github.com/dbader/schedule/compare/1.1.0...1.2.0)

---
updated-dependencies:
- dependency-name: schedule
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 14:19:59 +00:00
dependabot[bot]
351b5f6e65
Bump plotly from 5.14.0 to 5.14.1
Bumps [plotly](https://github.com/plotly/plotly.py) from 5.14.0 to 5.14.1.
- [Release notes](https://github.com/plotly/plotly.py/releases)
- [Changelog](https://github.com/plotly/plotly.py/blob/master/CHANGELOG.md)
- [Commits](https://github.com/plotly/plotly.py/compare/v5.14.0...v5.14.1)

---
updated-dependencies:
- dependency-name: plotly
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 14:19:56 +00:00
Matthias
605cc20a21
Merge pull request #8459 from freqtrade/feat/kvstore
Add initial bot start time to /profit endpoint
2023-04-10 14:49:01 +02:00
Matthias
f73d2a5371 Ensure bot_start is called when visualizing results 2023-04-10 14:48:02 +02:00
Matthias
485a074674
Merge pull request #8472 from freqtrade/dependabot/pip/develop/types-python-dateutil-2.8.19.12
Bump types-python-dateutil from 2.8.19.11 to 2.8.19.12
2023-04-10 14:42:53 +02:00
Matthias
865cf5232b
Merge pull request #8471 from freqtrade/dependabot/pip/develop/mypy-1.2.0
Bump mypy from 1.1.1 to 1.2.0
2023-04-10 14:42:35 +02:00
Matthias
95a24c3133
Merge pull request #8467 from freqtrade/dependabot/pip/develop/orjson-3.8.10
Bump orjson from 3.8.9 to 3.8.10
2023-04-10 14:41:25 +02:00
Matthias
6833059c70
Merge pull request #8474 from freqtrade/dependabot/github_actions/develop/pypa/gh-action-pypi-publish-1.8.5
Bump pypa/gh-action-pypi-publish from 1.8.4 to 1.8.5
2023-04-10 08:03:55 +02:00
Matthias
3833dc0b78
pre-commit - bump dateutil 2023-04-10 07:54:01 +02:00
Matthias
e0d3c771db
Merge pull request #8465 from freqtrade/dependabot/pip/develop/ccxt-3.0.58
Bump ccxt from 3.0.50 to 3.0.58
2023-04-10 07:53:21 +02:00
dependabot[bot]
5a18ab0784
Bump mypy from 1.1.1 to 1.2.0
Bumps [mypy](https://github.com/python/mypy) from 1.1.1 to 1.2.0.
- [Release notes](https://github.com/python/mypy/releases)
- [Commits](https://github.com/python/mypy/compare/v1.1.1...v1.2.0)

---
updated-dependencies:
- dependency-name: mypy
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 05:51:33 +00:00
Matthias
1d66f82b1d
Merge pull request #8469 from freqtrade/dependabot/pip/develop/filelock-3.11.0
Bump filelock from 3.10.6 to 3.11.0
2023-04-10 07:50:48 +02:00
Matthias
2e765fe6d1
Merge pull request #8470 from freqtrade/dependabot/pip/develop/pymdown-extensions-9.11
Bump pymdown-extensions from 9.10 to 9.11
2023-04-10 07:50:25 +02:00
Matthias
21ea02bbcf
Merge pull request #8466 from freqtrade/dependabot/pip/develop/pytest-7.3.0
Bump pytest from 7.2.2 to 7.3.0
2023-04-10 07:49:57 +02:00
dependabot[bot]
2ea0157197
Bump pypa/gh-action-pypi-publish from 1.8.4 to 1.8.5
Bumps [pypa/gh-action-pypi-publish](https://github.com/pypa/gh-action-pypi-publish) from 1.8.4 to 1.8.5.
- [Release notes](https://github.com/pypa/gh-action-pypi-publish/releases)
- [Commits](https://github.com/pypa/gh-action-pypi-publish/compare/v1.8.4...v1.8.5)

---
updated-dependencies:
- dependency-name: pypa/gh-action-pypi-publish
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 03:57:51 +00:00
dependabot[bot]
03352f3b62
Bump types-python-dateutil from 2.8.19.11 to 2.8.19.12
Bumps [types-python-dateutil](https://github.com/python/typeshed) from 2.8.19.11 to 2.8.19.12.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-python-dateutil
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 03:57:04 +00:00
dependabot[bot]
26eb4f7fe6
Bump pymdown-extensions from 9.10 to 9.11
Bumps [pymdown-extensions](https://github.com/facelessuser/pymdown-extensions) from 9.10 to 9.11.
- [Release notes](https://github.com/facelessuser/pymdown-extensions/releases)
- [Commits](https://github.com/facelessuser/pymdown-extensions/compare/9.10...9.11)

---
updated-dependencies:
- dependency-name: pymdown-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 03:56:57 +00:00
dependabot[bot]
7e1f3aa545
Bump filelock from 3.10.6 to 3.11.0
Bumps [filelock](https://github.com/tox-dev/py-filelock) from 3.10.6 to 3.11.0.
- [Release notes](https://github.com/tox-dev/py-filelock/releases)
- [Changelog](https://github.com/tox-dev/py-filelock/blob/main/docs/changelog.rst)
- [Commits](https://github.com/tox-dev/py-filelock/compare/3.10.6...3.11.0)

---
updated-dependencies:
- dependency-name: filelock
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 03:56:51 +00:00
dependabot[bot]
14532e3a56
Bump orjson from 3.8.9 to 3.8.10
Bumps [orjson](https://github.com/ijl/orjson) from 3.8.9 to 3.8.10.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.8.9...3.8.10)

---
updated-dependencies:
- dependency-name: orjson
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 03:56:42 +00:00
dependabot[bot]
a449f7c78c
Bump pytest from 7.2.2 to 7.3.0
Bumps [pytest](https://github.com/pytest-dev/pytest) from 7.2.2 to 7.3.0.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/7.2.2...7.3.0)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 03:56:38 +00:00
dependabot[bot]
8854ef8cba
Bump ccxt from 3.0.50 to 3.0.58
Bumps [ccxt](https://github.com/ccxt/ccxt) from 3.0.50 to 3.0.58.
- [Release notes](https://github.com/ccxt/ccxt/releases)
- [Changelog](https://github.com/ccxt/ccxt/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ccxt/ccxt/compare/3.0.50...3.0.58)

---
updated-dependencies:
- dependency-name: ccxt
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 03:56:33 +00:00
Matthias
526943f29e Remove freqUI alpha warning 2023-04-09 19:44:38 +02:00
Matthias
5404905d28 Fix typos in docs 2023-04-08 17:13:51 +02:00
Matthias
bed51fa790 Properly build specific Torch image 2023-04-08 17:00:25 +02:00
Matthias
f5a5c2d6b9 Improve imports 2023-04-08 16:44:33 +02:00
Matthias
a102cfdfc9 Add new /profit fields to API 2023-04-08 16:41:25 +02:00
Matthias
be72670ca2 Add documentation about /profit change 2023-04-08 16:40:14 +02:00
Matthias
cf2cb94f8d Add bot start date to /profit output 2023-04-08 16:38:44 +02:00
Matthias
fa3a81b022 convert Keys to enum 2023-04-08 16:28:50 +02:00
Matthias
7ff30c6df8 Add additional, typesafe getters 2023-04-08 16:24:38 +02:00
Matthias
7751768b2e Store initial_time value 2023-04-08 16:13:16 +02:00
robcaulk
69b9b35a08 Merge remote-tracking branch 'origin/develop' into feat/add-pytorch-model-support 2023-04-08 13:22:25 +02:00
robcaulk
48d3c8e62e fix model loading from disk bug, improve doc, clarify installation/docker instructions, add a torch tag to the freqairl docker image. Fix seriously outdated prediction_model docstrings 2023-04-08 12:09:53 +02:00
Matthias
ac817b7808 Improve docstrings for key-value store 2023-04-08 10:09:31 +02:00
Matthias
4d4f4bf23e Add test for key_value_store 2023-04-08 10:07:21 +02:00
Matthias
c083723698 Add initial version of key value store 2023-04-08 10:07:03 +02:00
Yinon Polak
a655524221 pytorch mlp rename input to fix mypy error 2023-04-04 12:24:29 +03:00
Yinon Polak
26738370c7 pytorch mlp add explicit annotation to fix mypy error 2023-04-04 12:12:02 +03:00
Yinon Polak
6b204c97ed fix pytorch data convertor type hints 2023-04-03 19:02:07 +03:00
Yinon Polak
0c4574b3b7 prevent mypy error, explicitly unpack input list of pytorch mlp model, 2023-04-03 18:10:47 +03:00
Yinon Polak
d9d9993179 add documentation 2023-04-03 17:06:39 +03:00
Yinon Polak
7b494c8333 add documentation to pytorch data convertor 2023-04-03 16:39:49 +03:00
Yinon Polak
bc9454e0f9 add device to data convertor class doc 2023-04-03 16:36:38 +03:00
Yinon Polak
36a0a14a23 clean code 2023-04-03 16:26:42 +03:00
Yinon Polak
c137666230 fix imports 2023-04-03 16:03:15 +03:00
Yinon Polak
bd3b70293f add pytorch data convertor 2023-04-03 15:19:10 +03:00
Yinon Polak
5a7ca35c6b declare class names in FreqaiExampleHybridStrategy 2023-03-28 16:24:49 +03:00
Yinon Polak
077a947972 clean code 2023-03-28 15:18:10 +03:00
Yinon Polak
8ac3a94358 add note to pytorch docs - setting class names for classifiers 2023-03-28 15:17:40 +03:00
Yinon Polak
dfbebdea9b improve comment on class_names in freqai interface 2023-03-28 14:44:44 +03:00
Yinon Polak
b795a70102 fix config example in pytorch mlp documentation 2023-03-28 14:44:43 +03:00
Yinon Polak
026b6a39a9 bugfix skip test split when empty 2023-03-28 14:40:23 +03:00
Yinon Polak
8903ba5d89 fix enf of file 2023-03-24 20:35:55 +03:00
Yinon Polak
eabd321281 small docs change 2023-03-23 15:59:57 +02:00
Yinon Polak
45c6ae446f small docs change 2023-03-23 15:04:29 +02:00
Yinon Polak
952e641213 small docs change 2023-03-23 12:43:37 +02:00
Yinon Polak
c44b5b1b3a add pytorch parameters to parameter table docs 2023-03-23 12:41:20 +02:00
Yinon Polak
fc8625c5c5 add pytorch classes uml diagram 2023-03-23 12:13:27 +02:00
Yinon Polak
36a005754a add pytorch documentation 2023-03-22 18:15:57 +02:00
Yinon Polak
479aafc331 rename Torch to PyTorch 2023-03-22 17:50:00 +02:00
Yinon Polak
f81e3d8667 sort imports 2023-03-21 16:42:13 +02:00
Yinon Polak
b9c7d338b3 fix test_start_backtesting 2023-03-21 16:38:05 +02:00
Yinon Polak
4f93106755 Merge remote-tracking branch 'origin/feat/add-pytorch-model-support' into feat/add-pytorch-model-support 2023-03-21 16:26:42 +02:00
Yinon Polak
02bccd0097 add pytorch mlp models to test_start_backtesting 2023-03-21 16:20:35 +02:00
robcaulk
1ba01746a0 organize pytorch files 2023-03-21 15:09:54 +01:00
Yinon Polak
83a7d888bc type hint init in pytorch mlp classes 2023-03-21 15:19:34 +02:00
Yinon Polak
eba82360fa skip pytorch tests on python 3.11 and intel based mac os 2023-03-21 15:18:05 +02:00
Yinon Polak
3fa23860c0 skip pytorch tests on python 3.11 and intel based mac os 2023-03-21 14:34:27 +02:00
Yinon Polak
a80afc8f1b add optional target tensor squeezing to pytorch trainer 2023-03-21 13:20:54 +02:00
Yinon Polak
97339e14cf round up divisions in calc_n_epochs 2023-03-21 12:29:05 +02:00
Yinon Polak
443263803c unsqueeze target tensor when 1 dimensional 2023-03-21 11:42:05 +02:00
Yinon Polak
9906e7d646 clean code 2023-03-21 11:23:45 +02:00
Yinon Polak
e8f040bfbd add class_name attribute to freqai interface 2023-03-20 20:38:43 +02:00
Yinon Polak
a4b617e482 type hints fixes 2023-03-20 20:22:28 +02:00
Yinon Polak
c06cd38951 clean code 2023-03-20 19:55:39 +02:00
Yinon Polak
0a55753faf move default attributes of pytorch classifier to initializer,
to prevent mypy from complaining
2023-03-20 19:40:36 +02:00
Yinon Polak
6b4d9f97c1 clean code 2023-03-20 19:28:30 +02:00
Yinon Polak
bf4aa91aab Merge remote-tracking branch 'origin/feat/add-pytorch-model-support' into feat/add-pytorch-model-support
# Conflicts:
#	freqtrade/freqai/base_models/PyTorchModelTrainer.py
#	freqtrade/freqai/prediction_models/PyTorchClassifier.py
#	freqtrade/freqai/prediction_models/PyTorchMLPClassifier.py
#	freqtrade/freqai/prediction_models/PyTorchMLPModel.py
#	tests/freqai/test_freqai_interface.py
2023-03-20 18:44:24 +02:00
Yinon Polak
500c401b75 improve pytorch classifier documentation 2023-03-20 18:41:04 +02:00
Yinon Polak
81a2cbb4eb fix tests 2023-03-20 18:41:04 +02:00
Yinon Polak
0510cf4491 add config params to tests 2023-03-20 18:41:04 +02:00
Yinon Polak
68728409aa add pytorch regressor test 2023-03-20 18:41:04 +02:00
Yinon Polak
c00ffcee59 fix pytorch classifier test 2023-03-20 18:41:04 +02:00
Yinon Polak
9aec1ddb17 sort imports 2023-03-20 18:41:04 +02:00
Yinon Polak
d98890f32e sort imports 2023-03-20 18:41:04 +02:00
Yinon Polak
f659f8e309 remove unused imports 2023-03-20 18:41:04 +02:00
Yinon Polak
54db239175 add pytorch regressor example 2023-03-20 18:41:04 +02:00
Yinon Polak
601c37f862 refactor classifiers class names 2023-03-20 18:41:04 +02:00
Yinon Polak
501e746c52 improve mlp documentation 2023-03-20 18:41:04 +02:00
Yinon Polak
d04146d1b1 improve mlp documentation 2023-03-20 18:41:04 +02:00
Yinon Polak
ea08931ab3 add mlp documentation 2023-03-20 18:41:04 +02:00
Yinon Polak
ddd1b5c0ff modify feedforward net, move layer norm to start of thr block 2023-03-20 18:41:04 +02:00
Yinon Polak
e08d8190ae fix test 2023-03-20 18:41:04 +02:00
Yinon Polak
fbf7049ac5 sort imports 2023-03-20 18:41:04 +02:00
Yinon Polak
2a1a8c0e64 fix test 2023-03-20 18:41:04 +02:00
Yinon Polak
833aaf8e10 create children class to PyTorchClassifier to implement the fit method where we initialize the trainer and model objects 2023-03-20 18:41:04 +02:00
Yinon Polak
566346dd87 classifier test - set model file extension 2023-03-20 18:41:03 +02:00
Yinon Polak
d0a33d2ee7 fix tests 2023-03-20 18:41:03 +02:00
robcaulk
fab505be1b cheat flake8 for now until we can refactor save into the model class 2023-03-20 18:41:03 +02:00
Yinon Polak
2f386913ac refactor classifiers class names 2023-03-20 11:54:17 +02:00
Yinon Polak
1c11a5f048 improve mlp documentation 2023-03-19 18:10:57 +02:00
Yinon Polak
903a1dc3e5 improve mlp documentation 2023-03-19 18:04:01 +02:00
Yinon Polak
6f9a8a089c add mlp documentation 2023-03-19 17:45:30 +02:00
Yinon Polak
8bee499328 modify feedforward net, move layer norm to start of thr block 2023-03-19 17:03:36 +02:00
Yinon Polak
719faab4b8 fix test 2023-03-19 15:21:34 +02:00
Yinon Polak
9f477aa3c9 sort imports 2023-03-19 15:09:50 +02:00
Yinon Polak
61ac36c576 fix test 2023-03-19 14:49:12 +02:00
Yinon Polak
366c148c10 create children class to PyTorchClassifier to implement the fit method where we initialize the trainer and model objects 2023-03-19 14:38:49 +02:00
Yinon Polak
a49f62eecb classifier test - set model file extension 2023-03-18 20:51:30 +02:00
Yinon Polak
fab9ff1294 fix tests 2023-03-18 15:27:38 +02:00
Yinon Polak
1c91b4427b Merge remote-tracking branch 'origin/feat/add-pytorch-model-support' into feat/add-pytorch-model-support 2023-03-18 14:14:38 +02:00
Yinon Polak
244662b1a4 set class names attribute in the general classifier testing strategy 2023-03-18 14:12:31 +02:00
robcaulk
4550447409 cheat flake8 for now until we can refactor save into the model class 2023-03-14 21:13:30 +01:00
Yinon Polak
366740885a reduce mlp number of parameters for testing 2023-03-13 20:18:26 +02:00
Yinon Polak
918889a2bd reduce mlp number of parameters for testing 2023-03-13 20:09:12 +02:00
Yinon Polak
9c8c30b0e8 add test 2023-03-13 17:17:00 +02:00
Yinon Polak
d7ea750823 revert to using model_training_parameters 2023-03-13 00:35:51 +02:00
Yinon Polak
b6096efadd logging change 2023-03-13 00:35:14 +02:00
Yinon Polak
b927c9dc01 remove train loss calculation from estimate_loss 2023-03-13 00:17:34 +02:00
Yinon Polak
523a58d3d6 simplify statement for pytorch file_type extension 2023-03-13 00:16:44 +02:00
Yinon Polak
0012fe36ca sort imports 2023-03-12 16:16:04 +02:00
Yinon Polak
cb17b36981 simplify file_type check comparisons 2023-03-12 14:50:08 +02:00
Yinon Polak
f9fdf1c31b generalize mlp model 2023-03-12 14:31:08 +02:00
Yinon Polak
1cf0e7be24 use one iteration on all test and train data for evaluation 2023-03-12 12:48:15 +02:00
Yinon Polak
8a9f2aedbb improve documentation 2023-03-09 14:55:52 +02:00
Yinon Polak
e88a0d5248 convert single quotes to double quotes 2023-03-09 13:29:11 +02:00
Yinon Polak
2ef11faba7 reformat documentation 2023-03-09 13:25:20 +02:00
Yinon Polak
c9eee2944b reformat documentation 2023-03-09 13:01:04 +02:00
Yinon Polak
6f962362f2 expand pytorch trainer documentation 2023-03-09 12:45:46 +02:00
Yinon Polak
ba5de0cd00 add documentation 2023-03-09 11:21:10 +02:00
Yinon Polak
3081b9402b add documentation 2023-03-09 11:14:54 +02:00
Yinon Polak
1597c3aa89 set class names in IStrategy.set_freqai_targets method, also save class name with model meta data 2023-03-08 18:36:44 +02:00
Yinon Polak
7d26df01b8 fix tensor type hint 2023-03-08 16:17:19 +02:00
Yinon Polak
c8296ccb2d sort imports 2023-03-08 16:13:35 +02:00
Yinon Polak
8d60327d60 add missing import 2023-03-08 16:12:47 +02:00
Yinon Polak
04564dc134 add missing import 2023-03-08 16:11:51 +02:00
Yinon Polak
6161b858c4 sort imports 2023-03-08 16:10:25 +02:00
Yinon Polak
1921a07b89 sort imports 2023-03-08 16:08:04 +02:00
Yinon Polak
b65ade51be revert config_freqai_example changes 2023-03-08 16:05:02 +02:00
Yinon Polak
dfbb2e2b35 sort imports 2023-03-08 16:03:36 +02:00
Yinon Polak
1805db2b07 change documentation and small bugfix 2023-03-08 15:38:22 +02:00
Yinon Polak
76fbec0c17 ad multiclass target names encoder to ints 2023-03-08 14:29:38 +02:00
Yinon Polak
4241bff32a type hints fixes 2023-03-06 20:15:36 +02:00
Yinon Polak
5dd60eda36 type hints fixes 2023-03-06 19:37:08 +02:00
Yinon Polak
8acdd0b47c type hints fixes 2023-03-06 19:14:54 +02:00
Yinon Polak
125085fbaf add freqai.model_exists pytorch file type support 2023-03-06 18:10:49 +02:00
Yinon Polak
7eedcb9c14 reformat code 2023-03-06 17:56:07 +02:00
Yinon Polak
e6e747bcd8 reformat code 2023-03-06 17:50:02 +02:00
Yinon Polak
348a08f1c4 add todo - currently assuming class labels are strings ['0.0', '1.0' .. n_classes]. need to resolve it per ClassifierModel 2023-03-06 16:41:47 +02:00
Yinon Polak
b1ac2bf515 use data loader, add evaluation on epoch 2023-03-06 16:16:45 +02:00
Yinon Polak
751b205618 initial commit 2023-03-05 16:59:24 +02:00
72 changed files with 1670 additions and 220 deletions

View File

@@ -425,7 +425,7 @@ jobs:
         python setup.py sdist bdist_wheel
     - name: Publish to PyPI (Test)
-      uses: pypa/gh-action-pypi-publish@v1.8.4
+      uses: pypa/gh-action-pypi-publish@v1.8.5
       if: (github.event_name == 'release')
       with:
         user: __token__
@@ -433,7 +433,7 @@ jobs:
         repository_url: https://test.pypi.org/legacy/
     - name: Publish to PyPI
-      uses: pypa/gh-action-pypi-publish@v1.8.4
+      uses: pypa/gh-action-pypi-publish@v1.8.5
       if: (github.event_name == 'release')
       with:
         user: __token__

View File

@@ -17,8 +17,8 @@ repos:
         - types-filelock==3.2.7
         - types-requests==2.28.11.17
         - types-tabulate==0.9.0.2
-        - types-python-dateutil==2.8.19.11
-        - SQLAlchemy==2.0.8
+        - types-python-dateutil==2.8.19.12
+        - SQLAlchemy==2.0.9
         # stages: [push]
 - repo: https://github.com/pycqa/isort

View File

@@ -12,6 +12,7 @@ TAG=$(echo "${BRANCH_NAME}" | sed -e "s/\//_/g")
 TAG_PLOT=${TAG}_plot
 TAG_FREQAI=${TAG}_freqai
 TAG_FREQAI_RL=${TAG_FREQAI}rl
+TAG_FREQAI_TORCH=${TAG_FREQAI}torch
 TAG_PI="${TAG}_pi"
 TAG_ARM=${TAG}_arm
@@ -84,6 +85,10 @@ docker manifest push -p ${IMAGE_NAME}:${TAG_FREQAI}
 docker manifest create ${IMAGE_NAME}:${TAG_FREQAI_RL} ${CACHE_IMAGE}:${TAG_FREQAI_RL} ${CACHE_IMAGE}:${TAG_FREQAI_RL_ARM}
 docker manifest push -p ${IMAGE_NAME}:${TAG_FREQAI_RL}
+
+# Create special Torch tag - which is identical to the RL tag.
+docker manifest create ${IMAGE_NAME}:${TAG_FREQAI_TORCH} ${CACHE_IMAGE}:${TAG_FREQAI_RL} ${CACHE_IMAGE}:${TAG_FREQAI_RL_ARM}
+docker manifest push -p ${IMAGE_NAME}:${TAG_FREQAI_TORCH}
 # copy images to ghcr.io
 alias crane="docker run --rm -i -v $(pwd)/.crane:/home/nonroot/.docker/ gcr.io/go-containerregistry/crane"
@@ -93,6 +98,7 @@ chmod a+rwx .crane
 echo "${GHCR_TOKEN}" | crane auth login ghcr.io -u "${GHCR_USERNAME}" --password-stdin
 crane copy ${IMAGE_NAME}:${TAG_FREQAI_RL} ${GHCR_IMAGE_NAME}:${TAG_FREQAI_RL}
+crane copy ${IMAGE_NAME}:${TAG_FREQAI_RL} ${GHCR_IMAGE_NAME}:${TAG_FREQAI_TORCH}
 crane copy ${IMAGE_NAME}:${TAG_FREQAI} ${GHCR_IMAGE_NAME}:${TAG_FREQAI}
 crane copy ${IMAGE_NAME}:${TAG_PLOT} ${GHCR_IMAGE_NAME}:${TAG_PLOT}
 crane copy ${IMAGE_NAME}:${TAG} ${GHCR_IMAGE_NAME}:${TAG}

(Binary file added - image, 18 KiB; presumably the `freqai_pytorch-diagram.png` referenced in the docs below. Not shown.)

View File

@@ -236,3 +236,161 @@ If you want to predict multiple targets you must specify all labels in the same
df['&s-up_or_down'] = np.where( df["close"].shift(-100) > df["close"], 'up', 'down')
df['&s-up_or_down'] = np.where( df["close"].shift(-100) == df["close"], 'same', df['&s-up_or_down'])
```

## PyTorch Module

### Quick start

The easiest way to quickly run a PyTorch model is with the following command (for a regression task):

```bash
freqtrade trade --config config_examples/config_freqai.example.json --strategy FreqaiExampleStrategy --freqaimodel PyTorchMLPRegressor --strategy-path freqtrade/templates
```

!!! note "Installation/docker"
    The PyTorch module requires large packages such as `torch`, which should be explicitly requested during `./setup.sh -i` by answering "y" to the question "Do you also want dependencies for freqai-rl or PyTorch (~700mb additional space required) [y/N]?".
    Users who prefer docker should ensure they use the docker image appended with `_freqaitorch`.

### Structure

#### Model

You can construct your own neural network architecture in PyTorch by simply defining your `nn.Module` class inside your custom [`IFreqaiModel` file](#using-different-prediction-models) and then using that class in your `def train()` function. Here is an example of a logistic regression model implemented in PyTorch (to be used with the `nn.BCELoss` criterion) for classification tasks.
```python
import logging
from typing import Any, Dict

import torch
import torch.nn as nn

# Imports added for completeness; paths follow the modules referenced in this
# changeset - adjust to your freqtrade version if they have moved.
from freqtrade.freqai.base_models.BasePyTorchClassifier import BasePyTorchClassifier
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
from freqtrade.freqai.torch.PyTorchDataConvertor import (DefaultPyTorchDataConvertor,
                                                         PyTorchDataConvertor)
from freqtrade.freqai.torch.PyTorchModelTrainer import PyTorchModelTrainer


class LogisticRegression(nn.Module):
    def __init__(self, input_size: int):
        super().__init__()
        # Define your layers
        self.linear = nn.Linear(input_size, 1)
        self.activation = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Define the forward pass
        out = self.linear(x)
        out = self.activation(out)
        return out


class MyCoolPyTorchClassifier(BasePyTorchClassifier):
    """
    This is a custom IFreqaiModel showing how a user might setup their own
    custom Neural Network architecture for their training.
    """

    @property
    def data_convertor(self) -> PyTorchDataConvertor:
        return DefaultPyTorchDataConvertor(target_tensor_type=torch.float)

    def __init__(self, **kwargs) -> None:
        super().__init__(**kwargs)
        config = self.freqai_info.get("model_training_parameters", {})
        self.learning_rate: float = config.get("learning_rate", 3e-4)
        self.model_kwargs: Dict[str, Any] = config.get("model_kwargs", {})
        self.trainer_kwargs: Dict[str, Any] = config.get("trainer_kwargs", {})

    def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
        """
        User sets up the training and test data to fit their desired model here
        :param data_dictionary: the dictionary holding all data for train, test,
            labels, weights
        :param dk: The datakitchen object for the current coin/model
        """
        class_names = self.get_class_names()
        self.convert_label_column_to_int(data_dictionary, dk, class_names)
        n_features = data_dictionary["train_features"].shape[-1]
        model = LogisticRegression(
            input_size=n_features
        )
        model.to(self.device)
        optimizer = torch.optim.AdamW(model.parameters(), lr=self.learning_rate)
        # BCELoss matches the sigmoid output above, as noted in the text.
        criterion = torch.nn.BCELoss()
        init_model = self.get_init_model(dk.pair)
        trainer = PyTorchModelTrainer(
            model=model,
            optimizer=optimizer,
            criterion=criterion,
            model_meta_data={"class_names": class_names},
            device=self.device,
            init_model=init_model,
            data_convertor=self.data_convertor,
            **self.trainer_kwargs,
        )
        trainer.fit(data_dictionary, self.splits)
        return trainer
```
#### Trainer

The `PyTorchModelTrainer` performs the idiomatic PyTorch training loop:
it defines the model, loss function, and optimizer, then moves them to the appropriate device (GPU or CPU). Inside the loop, it iterates through the batches in the dataloader, moves the data to the device, computes the prediction and loss, backpropagates, and updates the model parameters via the optimizer (a generic sketch of this loop follows the list below).

In addition, the trainer is responsible for the following:

- saving and loading the model
- converting the data from `pandas.DataFrame` to `torch.Tensor`.
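
For orientation, here is a minimal, generic sketch of the loop the trainer encapsulates. It is illustrative only, not the trainer's actual code; batch construction, evaluation, and checkpointing are omitted:

```python
def train_loop(model, criterion, optimizer, dataloader, device, n_epochs):
    model.to(device)   # move model to GPU/CPU
    model.train()
    for _ in range(n_epochs):
        for xb, yb in dataloader:
            xb, yb = xb.to(device), yb.to(device)  # move batch to device
            preds = model(xb)                      # forward pass
            loss = criterion(preds, yb)            # compute loss
            optimizer.zero_grad()
            loss.backward()                        # backpropagate
            optimizer.step()                       # update parameters
```
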
#### Integration with Freqai module

Like all freqai models, PyTorch models inherit `IFreqaiModel`. `IFreqaiModel` declares three abstract methods: `train`, `fit`, and `predict`. We implement these methods in three levels of hierarchy.
From top to bottom (a skeletal sketch follows the diagram):

1. `BasePyTorchModel` - Implements the `train` method. All `BasePyTorch*` classes inherit it. Responsible for general data preparation (e.g., data normalization) and for calling the `fit` method. Sets the `device` attribute used by children classes and the `model_type` attribute used by the parent class.
2. `BasePyTorch*` - Implements the `predict` method. Here, the `*` represents a group of algorithms, such as classifiers or regressors. Responsible for data preprocessing, predicting, and postprocessing if needed.
3. `PyTorch*Classifier` / `PyTorch*Regressor` - Implements the `fit` method. Responsible for the main training flow, where we initialize the trainer and model objects.

![image](assets/freqai_pytorch-diagram.png)
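
A skeletal Python view of these three levels (names from the text above; bodies elided and `IFreqaiModel` stubbed purely for illustration):

```python
class IFreqaiModel:  # stub standing in for freqtrade.freqai.freqai_interface.IFreqaiModel
    ...

class BasePyTorchModel(IFreqaiModel):             # level 1 - implements train()
    def train(self, unfiltered_df, pair, dk, **kwargs): ...

class BasePyTorchRegressor(BasePyTorchModel):     # level 2 - implements predict()
    def predict(self, unfiltered_df, dk, **kwargs): ...

class PyTorchMLPRegressor(BasePyTorchRegressor):  # level 3 - implements fit()
    def fit(self, data_dictionary, dk, **kwargs): ...
```
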
#### Full example

Building a PyTorch regressor using an MLP (multilayer perceptron) model, the MSELoss criterion, and the AdamW optimizer.
```python
class PyTorchMLPRegressor(BasePyTorchRegressor):

    @property
    def data_convertor(self) -> PyTorchDataConvertor:
        # Required by the abstract property on BasePyTorchModel (see below).
        return DefaultPyTorchDataConvertor(target_tensor_type=torch.float)

    def __init__(self, **kwargs) -> None:
        super().__init__(**kwargs)
        config = self.freqai_info.get("model_training_parameters", {})
        self.learning_rate: float = config.get("learning_rate", 3e-4)
        self.model_kwargs: Dict[str, Any] = config.get("model_kwargs", {})
        self.trainer_kwargs: Dict[str, Any] = config.get("trainer_kwargs", {})

    def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
        n_features = data_dictionary["train_features"].shape[-1]
        model = PyTorchMLPModel(
            input_dim=n_features,
            output_dim=1,
            **self.model_kwargs
        )
        model.to(self.device)
        optimizer = torch.optim.AdamW(model.parameters(), lr=self.learning_rate)
        criterion = torch.nn.MSELoss()
        init_model = self.get_init_model(dk.pair)
        trainer = PyTorchModelTrainer(
            model=model,
            optimizer=optimizer,
            criterion=criterion,
            device=self.device,
            init_model=init_model,
            # Aligned with the data-convertor based trainer interface used above.
            data_convertor=self.data_convertor,
            **self.trainer_kwargs,
        )
        trainer.fit(data_dictionary, self.splits)
        return trainer
```

Here we create a `PyTorchMLPRegressor` class that implements the `fit` method. The `fit` method specifies the training building blocks: model, optimizer, criterion, and trainer. We inherit both `BasePyTorchRegressor` and `BasePyTorchModel`, where the former implements the `predict` method suitable for our regression task, and the latter implements the `train` method.

??? Note "Setting Class Names for Classifiers"
    When using classifiers, the user must declare the class names (or targets) by overriding the `IFreqaiModel.class_names` attribute. This is achieved by setting `self.freqai.class_names` in the FreqAI strategy inside the `set_freqai_targets` method.

    For example, if you are using a binary classifier to predict price movements as up or down, you can set the class names as follows:

    ```python
    def set_freqai_targets(self, dataframe: DataFrame, metadata: Dict, **kwargs):
        self.freqai.class_names = ["down", "up"]
        dataframe['&s-up_or_down'] = np.where(dataframe["close"].shift(-100) >
                                              dataframe["close"], 'up', 'down')

        return dataframe
    ```

    To see a full example, you can refer to the [classifier test strategy class](https://github.com/freqtrade/freqtrade/blob/develop/tests/strategy/strats/freqai_test_classifier.py).

View File

@@ -86,6 +86,27 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
 | `randomize_starting_position` | Randomize the starting point of each episode to avoid overfitting. <br> **Datatype:** bool. <br> Default: `False`.
 | `drop_ohlc_from_features` | Do not include the normalized ohlc data in the feature set passed to the agent during training (ohlc will still be used for driving the environment in all cases) <br> **Datatype:** Boolean. <br> **Default:** `False`

### PyTorch parameters

#### general

| Parameter | Description |
|------------|-------------|
| | **Model training parameters within the `freqai.model_training_parameters` sub dictionary**
| `learning_rate` | Learning rate to be passed to the optimizer. <br> **Datatype:** float. <br> Default: `3e-4`.
| `model_kwargs` | Parameters to be passed to the model class. <br> **Datatype:** dict. <br> Default: `{}`.
| `trainer_kwargs` | Parameters to be passed to the trainer class. <br> **Datatype:** dict. <br> Default: `{}`.

#### trainer_kwargs

| Parameter | Description |
|------------|-------------|
| | **Model training parameters within the `freqai.model_training_parameters.trainer_kwargs` sub dictionary**
| `max_iters` | The number of training iterations to run. An iteration refers to one call to `self.optimizer.step()`; used to calculate `n_epochs`. <br> **Datatype:** int. <br> Default: `100`.
| `batch_size` | The size of the batches to use during training. <br> **Datatype:** int. <br> Default: `64`.
| `max_n_eval_batches` | The maximum number of batches to use for evaluation. <br> **Datatype:** int, optional. <br> Default: `None`.
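
For orientation, here is how these keys nest in the configuration, shown as an equivalent Python dict; values are the documented defaults from the tables above (a sketch, not a complete config):

```python
freqai_section = {
    "model_training_parameters": {
        "learning_rate": 3e-4,       # passed to the optimizer
        "model_kwargs": {},          # forwarded to the model class
        "trainer_kwargs": {          # forwarded to the trainer class
            "max_iters": 100,
            "batch_size": 64,
            "max_n_eval_batches": None,
        },
    },
}
```
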
### Additional parameters

| Parameter | Description |

View File

@@ -1,6 +1,6 @@
 markdown==3.3.7
 mkdocs==1.4.2
-mkdocs-material==9.1.5
+mkdocs-material==9.1.6
 mdx_truly_sane_lists==1.3
-pymdown-extensions==9.10
+pymdown-extensions==9.11
 jinja2==3.1.2

View File

@@ -9,9 +9,6 @@ This same command can also be used to update freqUI, should there be a new release.
 Once the bot is started in trade / dry-run mode (with `freqtrade trade`) - the UI will be available under the configured port below (usually `http://127.0.0.1:8080`).

-!!! info "Alpha release"
-    FreqUI is still considered an alpha release - if you encounter bugs or inconsistencies please open a [FreqUI issue](https://github.com/freqtrade/frequi/issues/new/choose).

 !!! Note "developers"
     Developers should not use this method, but instead use the method described in the [freqUI repository](https://github.com/freqtrade/frequi) to get the source-code of freqUI.

View File

@@ -23,10 +23,22 @@ These modes can be configured with these values:
     'stoploss_on_exchange_limit_ratio': 0.99
 ```

 !!! Note
-    Stoploss on exchange is only supported for Binance (stop-loss-limit), Huobi (stop-limit), Kraken (stop-loss-market, stop-loss-limit), Gate (stop-limit), and Kucoin (stop-limit and stop-market) as of now.
-    <ins>Do not set too low/tight stoploss value if using stop loss on exchange!</ins>
-    If set to low/tight then you have greater risk of missing fill on the order and stoploss will not work.
+    Stoploss on exchange is only supported for the following exchanges, and not all exchanges support both stop-limit and stop-market.
+    The Order-type will be ignored if only one mode is available.
+
+    | Exchange | stop-loss type |
+    |----------|----------------|
+    | Binance  | limit |
+    | Binance Futures | market, limit |
+    | Huobi    | limit |
+    | Kraken   | market, limit |
+    | Gate     | limit |
+    | Okx      | limit |
+    | Kucoin   | stop-limit, stop-market |
+
+!!! Note "Tight stoploss"
+    <ins>Do not set too low/tight a stoploss value when using stop loss on exchange!</ins>
+    If set too low/tight, you will have greater risk of missing fill on the order and stoploss will not work.
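
For reference, a sketch of the strategy-level settings this note applies to, assuming the standard `order_types` dictionary from the surrounding documentation (values illustrative):

```python
# Illustrative strategy attribute; see the full order_types schema in the docs.
order_types = {
    "stoploss": "limit",                       # or "market", where supported (see table)
    "stoploss_on_exchange": True,
    "stoploss_on_exchange_limit_ratio": 0.99,  # from the example above
}
```
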
### stoploss_on_exchange and stoploss_on_exchange_limit_ratio

View File

@@ -279,6 +279,7 @@ Return a summary of your profit/loss and performance.
 > ∙ `33.095 EUR`
 >
 > **Total Trade Count:** `138`
+> **Bot started:** `2022-07-11 18:40:44`
 > **First Trade opened:** `3 days ago`
 > **Latest Trade opened:** `2 minutes ago`
 > **Avg. Duration:** `2:33:45`
@@ -292,6 +293,7 @@ The relative profit of `15.2 Σ%` is based on the starting capital - so in th
 Starting capital is either taken from the `available_capital` setting, or calculated by using current wallet size - profits.
 Profit Factor is calculated as gross profits / gross losses - and should serve as an overall metric for the strategy.
 Max drawdown corresponds to the backtesting metric `Absolute Drawdown (Account)` - calculated as `(Absolute Drawdown) / (DrawdownHigh + startingBalance)`.
+Bot started date will refer to the date the bot was first started. For older bots, this will default to the first trade's open date.

 ### /forceexit <trade_id>

View File

@@ -116,7 +116,7 @@ class TimeRange:
         :param text: value from --timerange
         :return: Start and End range period
         """
-        if text is None:
+        if not text:
             return TimeRange(None, None, 0, 0)
         syntax = [(r'^-(\d{8})$', (None, 'date')),
                   (r'^(\d{8})-$', ('date', None)),
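
The practical effect, per the "Improve timerange parsing when accepting values from API" commit above: API callers may send an empty string rather than omitting the value, and `not text` treats both cases as "no timerange". A quick illustration:

```python
# Both None and "" now take the early-return path.
for text in (None, ""):
    if not text:
        print(f"{text!r} -> TimeRange(None, None, 0, 0)")
```
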

View File

@@ -64,6 +64,7 @@ USERPATH_FREQAIMODELS = 'freqaimodels'
 TELEGRAM_SETTING_OPTIONS = ['on', 'off', 'silent']
 WEBHOOK_FORMAT_OPTIONS = ['form', 'json', 'raw']
 FULL_DATAFRAME_THRESHOLD = 100
+CUSTOM_TAG_MAX_LENGTH = 255

 ENV_VAR_PREFIX = 'FREQTRADE__'
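
Per the commits above ("Limit enter_tag and exit_reason to their actual field length", "Use constant for custom field lengths"), here is a hedged sketch of how such a length constant is typically applied; `clamp_tag` is an illustrative helper, not actual freqtrade code:

```python
CUSTOM_TAG_MAX_LENGTH = 255  # mirrors the constant added above

def clamp_tag(tag):
    """Trim a user-supplied enter_tag/exit_reason to the storable field length."""
    return tag[:CUSTOM_TAG_MAX_LENGTH] if tag else tag

assert clamp_tag("x" * 300) == "x" * 255
assert clamp_tag(None) is None
```
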

View File

@@ -246,14 +246,8 @@ def _load_backtest_data_df_compatibility(df: pd.DataFrame) -> pd.DataFrame:
     """
     Compatibility support for older backtest data.
     """
-    df['open_date'] = pd.to_datetime(df['open_date'],
-                                     utc=True,
-                                     infer_datetime_format=True
-                                     )
-    df['close_date'] = pd.to_datetime(df['close_date'],
-                                      utc=True,
-                                      infer_datetime_format=True
-                                      )
+    df['open_date'] = pd.to_datetime(df['open_date'], utc=True)
+    df['close_date'] = pd.to_datetime(df['close_date'], utc=True)
     # Compatibility support for pre short Columns
     if 'is_short' not in df.columns:
         df['is_short'] = False

View File

@@ -34,7 +34,7 @@ def ohlcv_to_dataframe(ohlcv: list, timeframe: str, pair: str, *,
     cols = DEFAULT_DATAFRAME_COLUMNS
     df = DataFrame(ohlcv, columns=cols)

-    df['date'] = to_datetime(df['date'], unit='ms', utc=True, infer_datetime_format=True)
+    df['date'] = to_datetime(df['date'], unit='ms', utc=True)

     # Some exchanges return int values for Volume and even for OHLC.
     # Convert them since TA-LIB indicators used in the strategy assume floats
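
Context for these removals (see the "Remove deprecated pandas option" commit above): since pandas 2.0, `to_datetime` infers the datetime format by default, making `infer_datetime_format=True` a deprecated no-op. A quick check:

```python
import pandas as pd

# Epoch-millisecond input parses identically without the removed flag.
s = pd.to_datetime(pd.Series([1609459200000]), unit="ms", utc=True)
print(s.iloc[0])  # 2021-01-01 00:00:00+00:00
```
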

View File

@@ -63,10 +63,7 @@ class FeatherDataHandler(IDataHandler):
         pairdata.columns = self._columns
         pairdata = pairdata.astype(dtype={'open': 'float', 'high': 'float',
                                           'low': 'float', 'close': 'float', 'volume': 'float'})
-        pairdata['date'] = to_datetime(pairdata['date'],
-                                       unit='ms',
-                                       utc=True,
-                                       infer_datetime_format=True)
+        pairdata['date'] = to_datetime(pairdata['date'], unit='ms', utc=True)
         return pairdata

     def ohlcv_append(

View File

@@ -75,10 +75,7 @@ class JsonDataHandler(IDataHandler):
             return DataFrame(columns=self._columns)
         pairdata = pairdata.astype(dtype={'open': 'float', 'high': 'float',
                                           'low': 'float', 'close': 'float', 'volume': 'float'})
-        pairdata['date'] = to_datetime(pairdata['date'],
-                                       unit='ms',
-                                       utc=True,
-                                       infer_datetime_format=True)
+        pairdata['date'] = to_datetime(pairdata['date'], unit='ms', utc=True)
         return pairdata

     def ohlcv_append(

View File

@@ -62,10 +62,7 @@ class ParquetDataHandler(IDataHandler):
         pairdata.columns = self._columns
         pairdata = pairdata.astype(dtype={'open': 'float', 'high': 'float',
                                           'low': 'float', 'close': 'float', 'volume': 'float'})
-        pairdata['date'] = to_datetime(pairdata['date'],
-                                       unit='ms',
-                                       utc=True,
-                                       infer_datetime_format=True)
+        pairdata['date'] = to_datetime(pairdata['date'], unit='ms', utc=True)
         return pairdata

     def ohlcv_append(

View File

@@ -0,0 +1,147 @@
import logging
from typing import Dict, List, Tuple

import numpy as np
import numpy.typing as npt
import pandas as pd
import torch
from pandas import DataFrame
from torch.nn import functional as F

from freqtrade.exceptions import OperationalException
from freqtrade.freqai.base_models.BasePyTorchModel import BasePyTorchModel
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen

logger = logging.getLogger(__name__)

class BasePyTorchClassifier(BasePyTorchModel):
    """
    A PyTorch implementation of a classifier.
    User must implement fit method

    Important!

    - User must declare the target class names in the strategy,
      under IStrategy.set_freqai_targets method.

    for example, in your strategy:
    ```
    def set_freqai_targets(self, dataframe: DataFrame, metadata: Dict, **kwargs):
        self.freqai.class_names = ["down", "up"]
        dataframe['&s-up_or_down'] = np.where(dataframe["close"].shift(-100) >
                                              dataframe["close"], 'up', 'down')

        return dataframe
    ```
    """

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.class_name_to_index = None
        self.index_to_class_name = None

    def predict(
        self, unfiltered_df: DataFrame, dk: FreqaiDataKitchen, **kwargs
    ) -> Tuple[DataFrame, npt.NDArray[np.int_]]:
        """
        Filter the prediction features data and predict with it.
        :param unfiltered_df: Full dataframe for the current backtest period.
        :return:
        :pred_df: dataframe containing the predictions
        :do_predict: np.array of 1s and 0s to indicate places where freqai needed to remove
        data (NaNs) or felt uncertain about data (PCA and DI index)
        :raises ValueError: if 'class_names' doesn't exist in model meta_data.
        """
        class_names = self.model.model_meta_data.get("class_names", None)
        if not class_names:
            raise ValueError(
                "Missing class names. "
                "self.model.model_meta_data['class_names'] is None."
            )

        if not self.class_name_to_index:
            self.init_class_names_to_index_mapping(class_names)

        dk.find_features(unfiltered_df)
        filtered_df, _ = dk.filter_features(
            unfiltered_df, dk.training_features_list, training_filter=False
        )
        filtered_df = dk.normalize_data_from_metadata(filtered_df)
        dk.data_dictionary["prediction_features"] = filtered_df
        self.data_cleaning_predict(dk)
        x = self.data_convertor.convert_x(
            dk.data_dictionary["prediction_features"],
            device=self.device
        )
        logits = self.model.model(x)
        probs = F.softmax(logits, dim=-1)
        predicted_classes = torch.argmax(probs, dim=-1)
        predicted_classes_str = self.decode_class_names(predicted_classes)
        pred_df_prob = DataFrame(probs.detach().numpy(), columns=class_names)
        pred_df = DataFrame(predicted_classes_str, columns=[dk.label_list[0]])
        pred_df = pd.concat([pred_df, pred_df_prob], axis=1)
        return (pred_df, dk.do_predict)

    def encode_class_names(
        self,
        data_dictionary: Dict[str, pd.DataFrame],
        dk: FreqaiDataKitchen,
        class_names: List[str],
    ):
        """
        encode class name, str -> int
        assuming first column of *_labels data frame to be the target column
        containing the class names
        """
        target_column_name = dk.label_list[0]
        for split in self.splits:
            label_df = data_dictionary[f"{split}_labels"]
            self.assert_valid_class_names(label_df[target_column_name], class_names)
            label_df[target_column_name] = list(
                map(lambda x: self.class_name_to_index[x], label_df[target_column_name])
            )

    @staticmethod
    def assert_valid_class_names(
        target_column: pd.Series,
        class_names: List[str]
    ):
        non_defined_labels = set(target_column) - set(class_names)
        if len(non_defined_labels) != 0:
            raise OperationalException(
                f"Found non defined labels: {non_defined_labels}, ",
                f"expecting labels: {class_names}"
            )

    def decode_class_names(self, class_ints: torch.Tensor) -> List[str]:
        """
        decode class name, int -> str
        """
        return list(map(lambda x: self.index_to_class_name[x.item()], class_ints))

    def init_class_names_to_index_mapping(self, class_names):
        self.class_name_to_index = {s: i for i, s in enumerate(class_names)}
        self.index_to_class_name = {i: s for i, s in enumerate(class_names)}
        logger.info(f"encoded class name to index: {self.class_name_to_index}")

    def convert_label_column_to_int(
        self,
        data_dictionary: Dict[str, pd.DataFrame],
        dk: FreqaiDataKitchen,
        class_names: List[str]
    ):
        self.init_class_names_to_index_mapping(class_names)
        self.encode_class_names(data_dictionary, dk, class_names)

    def get_class_names(self) -> List[str]:
        if not self.class_names:
            raise ValueError(
                "self.class_names is empty, "
                "set self.freqai.class_names = ['class a', 'class b', 'class c'] "
                "inside IStrategy.set_freqai_targets method."
            )

        return self.class_names
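
A tiny standalone illustration of the label mapping these helpers build and invert (not part of the class above):

```python
class_names = ["down", "up"]
class_name_to_index = {s: i for i, s in enumerate(class_names)}  # {'down': 0, 'up': 1}
index_to_class_name = {i: s for i, s in enumerate(class_names)}  # {0: 'down', 1: 'up'}
assert index_to_class_name[class_name_to_index["up"]] == "up"
```
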

View File

@@ -0,0 +1,83 @@
import logging
from abc import ABC, abstractmethod
from time import time
from typing import Any

import torch
from pandas import DataFrame

from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
from freqtrade.freqai.freqai_interface import IFreqaiModel
from freqtrade.freqai.torch.PyTorchDataConvertor import PyTorchDataConvertor

logger = logging.getLogger(__name__)


class BasePyTorchModel(IFreqaiModel, ABC):
    """
    Base class for PyTorch type models.
    User *must* inherit from this class and set fit() and predict() and
    data_convertor property.
    """

    def __init__(self, **kwargs):
        super().__init__(config=kwargs["config"])
        self.dd.model_type = "pytorch"
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        test_size = self.freqai_info.get('data_split_parameters', {}).get('test_size')
        self.splits = ["train", "test"] if test_size != 0 else ["train"]

    def train(
        self, unfiltered_df: DataFrame, pair: str, dk: FreqaiDataKitchen, **kwargs
    ) -> Any:
        """
        Filter the training data and train a model to it. Train makes heavy use of the datakitchen
        for storing, saving, loading, and analyzing the data.
        :param unfiltered_df: Full dataframe for the current training period
        :return:
        :model: Trained model which can be used to inference (self.predict)
        """

        logger.info(f"-------------------- Starting training {pair} --------------------")
        start_time = time()

        features_filtered, labels_filtered = dk.filter_features(
            unfiltered_df,
            dk.training_features_list,
            dk.label_list,
            training_filter=True,
        )

        # split data into train/test data.
        data_dictionary = dk.make_train_test_datasets(features_filtered, labels_filtered)
        if not self.freqai_info.get("fit_live_predictions", 0) or not self.live:
            dk.fit_labels()
        # normalize all data based on train_dataset only
        data_dictionary = dk.normalize_data(data_dictionary)

        # optional additional data cleaning/analysis
        self.data_cleaning_train(dk)

        logger.info(
            f"Training model on {len(dk.data_dictionary['train_features'].columns)} features"
        )
        logger.info(f"Training model on {len(data_dictionary['train_features'])} data points")

        model = self.fit(data_dictionary, dk)
        end_time = time()

        logger.info(f"-------------------- Done training {pair} "
                    f"({end_time - start_time:.2f} secs) --------------------")

        return model

    @property
    @abstractmethod
    def data_convertor(self) -> PyTorchDataConvertor:
        """
        a class responsible for converting `*_features` & `*_labels` pandas dataframes
        to pytorch tensors.
        """
        raise NotImplementedError("Abstract property")
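
A standalone sketch of the convertor interface implied by the `convert_x(...)` call sites in the predict methods; the real `DefaultPyTorchDataConvertor` lives in `freqtrade.freqai.torch.PyTorchDataConvertor` (imported above) and also handles label conversion:

```python
import pandas as pd
import torch

class MinimalDataConvertor:
    """Illustrative only - shows the shape of the interface, not the real class."""

    def convert_x(self, df: pd.DataFrame, device: str) -> torch.Tensor:
        # DataFrame -> float tensor on the training/inference device
        return torch.tensor(df.values, dtype=torch.float, device=device)
```
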

View File

@@ -0,0 +1,49 @@
import logging
from typing import Tuple

import numpy as np
import numpy.typing as npt
from pandas import DataFrame

from freqtrade.freqai.base_models.BasePyTorchModel import BasePyTorchModel
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen

logger = logging.getLogger(__name__)


class BasePyTorchRegressor(BasePyTorchModel):
    """
    A PyTorch implementation of a regressor.
    User must implement fit method
    """

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def predict(
        self, unfiltered_df: DataFrame, dk: FreqaiDataKitchen, **kwargs
    ) -> Tuple[DataFrame, npt.NDArray[np.int_]]:
        """
        Filter the prediction features data and predict with it.
        :param unfiltered_df: Full dataframe for the current backtest period.
        :return:
        :pred_df: dataframe containing the predictions
        :do_predict: np.array of 1s and 0s to indicate places where freqai needed to remove
        data (NaNs) or felt uncertain about data (PCA and DI index)
        """

        dk.find_features(unfiltered_df)
        filtered_df, _ = dk.filter_features(
            unfiltered_df, dk.training_features_list, training_filter=False
        )
        filtered_df = dk.normalize_data_from_metadata(filtered_df)
        dk.data_dictionary["prediction_features"] = filtered_df

        self.data_cleaning_predict(dk)
        x = self.data_convertor.convert_x(
            dk.data_dictionary["prediction_features"],
            device=self.device
        )
        y = self.model.model(x)
        pred_df = DataFrame(y.detach().numpy(), columns=[dk.label_list[0]])
        return (pred_df, dk.do_predict)

View File

@@ -446,7 +446,7 @@ class FreqaiDataDrawer:
             dump(model, save_path / f"{dk.model_filename}_model.joblib")
         elif self.model_type == 'keras':
             model.save(save_path / f"{dk.model_filename}_model.h5")
-        elif 'stable_baselines' in self.model_type or 'sb3_contrib' == self.model_type:
+        elif self.model_type in ["stable_baselines3", "sb3_contrib", "pytorch"]:
             model.save(save_path / f"{dk.model_filename}_model.zip")

         if dk.svm_model is not None:

@@ -496,7 +496,7 @@ class FreqaiDataDrawer:
         dk.training_features_list = dk.data["training_features_list"]
         dk.label_list = dk.data["label_list"]

-    def load_data(self, coin: str, dk: FreqaiDataKitchen) -> Any:
+    def load_data(self, coin: str, dk: FreqaiDataKitchen) -> Any:  # noqa: C901
         """
         loads all data required to make a prediction on a sub-train time range
         :returns:

@@ -537,6 +537,11 @@ class FreqaiDataDrawer:
                 self.model_type, self.freqai_info['rl_config']['model_type'])
             MODELCLASS = getattr(mod, self.freqai_info['rl_config']['model_type'])
             model = MODELCLASS.load(dk.data_path / f"{dk.model_filename}_model")
+        elif self.model_type == 'pytorch':
+            import torch
+            zip = torch.load(dk.data_path / f"{dk.model_filename}_model.zip")
+            model = zip["pytrainer"]
+            model = model.load_from_checkpoint(zip)

         if Path(dk.data_path / f"{dk.model_filename}_svm_model.joblib").is_file():
             dk.svm_model = load(dk.data_path / f"{dk.model_filename}_svm_model.joblib")
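
For reference, the dict that torch.load returns in the new pytorch branch above is exactly what PyTorchModelTrainer.save() writes later in this PR, so the lookup keys line up as follows (a sketch of the stored layout, not new API):

# Layout of f"{dk.model_filename}_model.zip" as written by PyTorchModelTrainer.save():
# {
#     "model_state_dict": ...,      # weights of the wrapped nn.Module
#     "optimizer_state_dict": ...,  # optimizer state, reused for continual learning
#     "model_meta_data": ...,       # e.g. class_names for classifiers
#     "pytrainer": ...,             # the pickled trainer object itself
# }
# zip["pytrainer"].load_from_checkpoint(zip) then restores the state dicts
# onto that trainer and returns it.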

View File

@@ -83,6 +83,7 @@ class IFreqaiModel(ABC):
         self.CONV_WIDTH = self.freqai_info.get('conv_width', 1)
         if self.ft_params.get("inlier_metric_window", 0):
             self.CONV_WIDTH = self.ft_params.get("inlier_metric_window", 0) * 2
+        self.class_names: List[str] = []  # used in classification subclasses
         self.pair_it = 0
         self.pair_it_train = 0
         self.total_pairs = len(self.config.get("exchange", {}).get("pair_whitelist"))

@@ -571,8 +572,9 @@ class IFreqaiModel(ABC):
             file_type = ".joblib"
         elif self.dd.model_type == 'keras':
             file_type = ".h5"
-        elif 'stable_baselines' in self.dd.model_type or 'sb3_contrib' == self.dd.model_type:
+        elif self.dd.model_type in ["stable_baselines3", "sb3_contrib", "pytorch"]:
             file_type = ".zip"

         path_to_modelfile = Path(dk.data_path / f"{dk.model_filename}_model{file_type}")
         file_exists = path_to_modelfile.is_file()
         if file_exists:

View File

@@ -14,16 +14,20 @@ logger = logging.getLogger(__name__)
 class CatboostClassifier(BaseClassifierModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         train_data = Pool(

View File

@@ -15,16 +15,20 @@ logger = logging.getLogger(__name__)
 class CatboostClassifierMultiTarget(BaseClassifierModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         cbc = CatBoostClassifier(

View File

@@ -14,16 +14,20 @@ logger = logging.getLogger(__name__)
 class CatboostRegressor(BaseRegressionModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         train_data = Pool(

View File

@@ -15,16 +15,20 @@ logger = logging.getLogger(__name__)
 class CatboostRegressorMultiTarget(BaseRegressionModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         cbr = CatBoostRegressor(

View File

@@ -12,16 +12,20 @@ logger = logging.getLogger(__name__)
 class LightGBMClassifier(BaseClassifierModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         if self.freqai_info.get('data_split_parameters', {}).get('test_size', 0.1) == 0:

View File

@@ -13,16 +13,20 @@ logger = logging.getLogger(__name__)
 class LightGBMClassifierMultiTarget(BaseClassifierModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         lgb = LGBMClassifier(**self.model_training_parameters)

View File

@@ -12,18 +12,20 @@ logger = logging.getLogger(__name__)
 class LightGBMRegressor(BaseRegressionModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
-        Most regressors use the same function names and arguments e.g. user
-        can drop in LGBMRegressor in place of CatBoostRegressor and all data
-        management will be properly handled by Freqai.
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        User sets up the training and test data to fit their desired model here
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         if self.freqai_info.get('data_split_parameters', {}).get('test_size', 0.1) == 0:

View File

@@ -13,16 +13,20 @@ logger = logging.getLogger(__name__)
 class LightGBMRegressorMultiTarget(BaseRegressionModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         lgb = LGBMRegressor(**self.model_training_parameters)

View File

@@ -0,0 +1,89 @@
from typing import Any, Dict
import torch
from freqtrade.freqai.base_models.BasePyTorchClassifier import BasePyTorchClassifier
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
from freqtrade.freqai.torch.PyTorchDataConvertor import (DefaultPyTorchDataConvertor,
PyTorchDataConvertor)
from freqtrade.freqai.torch.PyTorchMLPModel import PyTorchMLPModel
from freqtrade.freqai.torch.PyTorchModelTrainer import PyTorchModelTrainer
class PyTorchMLPClassifier(BasePyTorchClassifier):
"""
This class implements the fit method of IFreqaiModel.
in the fit method we initialize the model and trainer objects.
the only requirement from the model is to be aligned to PyTorchClassifier
predict method that expects the model to predict a tensor of type long.
parameters are passed via `model_training_parameters` under the freqai
section in the config file. e.g:
{
...
"freqai": {
...
"model_training_parameters" : {
"learning_rate": 3e-4,
"trainer_kwargs": {
"max_iters": 5000,
"batch_size": 64,
"max_n_eval_batches": null,
},
"model_kwargs": {
"hidden_dim": 512,
"dropout_percent": 0.2,
"n_layer": 1,
},
}
}
}
"""
@property
def data_convertor(self) -> PyTorchDataConvertor:
return DefaultPyTorchDataConvertor(
target_tensor_type=torch.long,
squeeze_target_tensor=True
)
def __init__(self, **kwargs) -> None:
super().__init__(**kwargs)
config = self.freqai_info.get("model_training_parameters", {})
self.learning_rate: float = config.get("learning_rate", 3e-4)
self.model_kwargs: Dict[str, Any] = config.get("model_kwargs", {})
self.trainer_kwargs: Dict[str, Any] = config.get("trainer_kwargs", {})
def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
"""
User sets up the training and test data to fit their desired model here
:param data_dictionary: the dictionary holding all data for train, test,
labels, weights
:param dk: The datakitchen object for the current coin/model
:raises ValueError: If self.class_names is not defined in the parent class.
"""
class_names = self.get_class_names()
self.convert_label_column_to_int(data_dictionary, dk, class_names)
n_features = data_dictionary["train_features"].shape[-1]
model = PyTorchMLPModel(
input_dim=n_features,
output_dim=len(class_names),
**self.model_kwargs
)
model.to(self.device)
optimizer = torch.optim.AdamW(model.parameters(), lr=self.learning_rate)
criterion = torch.nn.CrossEntropyLoss()
init_model = self.get_init_model(dk.pair)
trainer = PyTorchModelTrainer(
model=model,
optimizer=optimizer,
criterion=criterion,
model_meta_data={"class_names": class_names},
device=self.device,
init_model=init_model,
data_convertor=self.data_convertor,
**self.trainer_kwargs,
)
trainer.fit(data_dictionary, self.splits)
return trainer
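
A shape sketch (editor's illustration, not part of the commit) of why the classifier's data_convertor pins targets to torch.long and squeezes them: CrossEntropyLoss wants class indices of shape (batch,) against logits of shape (batch, n_classes):

import torch

criterion = torch.nn.CrossEntropyLoss()
logits = torch.randn(8, 3)             # (batch_size, n_classes), as produced by the MLP head
targets = torch.randint(0, 3, (8, 1))  # labels arrive as a (batch, 1) column
loss = criterion(logits, targets.squeeze().long())  # squeeze -> (8,), cast -> torch.long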

View File

@@ -0,0 +1,83 @@
from typing import Any, Dict
import torch
from freqtrade.freqai.base_models.BasePyTorchRegressor import BasePyTorchRegressor
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
from freqtrade.freqai.torch.PyTorchDataConvertor import (DefaultPyTorchDataConvertor,
PyTorchDataConvertor)
from freqtrade.freqai.torch.PyTorchMLPModel import PyTorchMLPModel
from freqtrade.freqai.torch.PyTorchModelTrainer import PyTorchModelTrainer
class PyTorchMLPRegressor(BasePyTorchRegressor):
"""
This class implements the fit method of IFreqaiModel.
in the fit method we initialize the model and trainer objects.
the only requirement from the model is to be aligned to PyTorchRegressor
predict method that expects the model to predict tensor of type float.
the trainer defines the training loop.
parameters are passed via `model_training_parameters` under the freqai
section in the config file. e.g:
{
...
"freqai": {
...
"model_training_parameters" : {
"learning_rate": 3e-4,
"trainer_kwargs": {
"max_iters": 5000,
"batch_size": 64,
"max_n_eval_batches": null,
},
"model_kwargs": {
"hidden_dim": 512,
"dropout_percent": 0.2,
"n_layer": 1,
},
}
}
}
"""
@property
def data_convertor(self) -> PyTorchDataConvertor:
return DefaultPyTorchDataConvertor(target_tensor_type=torch.float)
def __init__(self, **kwargs) -> None:
super().__init__(**kwargs)
config = self.freqai_info.get("model_training_parameters", {})
self.learning_rate: float = config.get("learning_rate", 3e-4)
self.model_kwargs: Dict[str, Any] = config.get("model_kwargs", {})
self.trainer_kwargs: Dict[str, Any] = config.get("trainer_kwargs", {})
def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
"""
User sets up the training and test data to fit their desired model here
:param data_dictionary: the dictionary holding all data for train, test,
labels, weights
:param dk: The datakitchen object for the current coin/model
"""
n_features = data_dictionary["train_features"].shape[-1]
model = PyTorchMLPModel(
input_dim=n_features,
output_dim=1,
**self.model_kwargs
)
model.to(self.device)
optimizer = torch.optim.AdamW(model.parameters(), lr=self.learning_rate)
criterion = torch.nn.MSELoss()
init_model = self.get_init_model(dk.pair)
trainer = PyTorchModelTrainer(
model=model,
optimizer=optimizer,
criterion=criterion,
device=self.device,
init_model=init_model,
data_convertor=self.data_convertor,
**self.trainer_kwargs,
)
trainer.fit(data_dictionary, self.splits)
return trainer
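
The regressor's convertor mirrors this with torch.float and no squeeze: MSELoss compares the single-unit head's (batch, 1) output against a (batch, 1) float target directly (editor's shape sketch):

import torch

criterion = torch.nn.MSELoss()
preds = torch.randn(8, 1)   # (batch_size, 1), from output_dim=1
targets = torch.rand(8, 1)  # float labels kept as a column; no squeeze required
loss = criterion(preds, targets)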

View File

@@ -18,16 +18,20 @@ logger = logging.getLogger(__name__)
 class XGBoostClassifier(BaseClassifierModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         X = data_dictionary["train_features"].to_numpy()

View File

@@ -18,16 +18,20 @@ logger = logging.getLogger(__name__)
 class XGBoostRFClassifier(BaseClassifierModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         X = data_dictionary["train_features"].to_numpy()

View File

@@ -12,16 +12,20 @@ logger = logging.getLogger(__name__)
 class XGBoostRFRegressor(BaseRegressionModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         X = data_dictionary["train_features"]

View File

@@ -12,16 +12,20 @@ logger = logging.getLogger(__name__)
 class XGBoostRegressor(BaseRegressionModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
         """

         X = data_dictionary["train_features"]

View File

@@ -13,16 +13,20 @@ logger = logging.getLogger(__name__)
 class XGBoostRegressorMultiTarget(BaseRegressionModel):
     """
-    User created prediction model. The class needs to override three necessary
-    functions, predict(), train(), fit(). The class inherits ModelHandler which
-    has its own DataHandler where data is held, saved, loaded, and managed.
+    User created prediction model. The class inherits IFreqaiModel, which
+    means it has full access to all Frequency AI functionality. Typically,
+    users would use this to override the common `fit()`, `train()`, or
+    `predict()` methods to add their custom data handling tools or change
+    various aspects of the training that cannot be configured via the
+    top level config.json file.
     """

     def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
         """
         User sets up the training and test data to fit their desired model here
-        :param data_dictionary: the dictionary constructed by DataHandler to hold
-        all the training and test data/labels.
+        :param data_dictionary: the dictionary holding all data for train, test,
+        labels, weights
+        :param dk: The datakitchen object for the current coin/model
        """

         xgb = XGBRegressor(**self.model_training_parameters)

View File

@@ -0,0 +1,67 @@
from abc import ABC, abstractmethod
from typing import List, Optional
import pandas as pd
import torch
class PyTorchDataConvertor(ABC):
"""
This class is responsible for converting `*_features` & `*_labels` pandas dataframes
to pytorch tensors.
"""
@abstractmethod
def convert_x(self, df: pd.DataFrame, device: Optional[str] = None) -> List[torch.Tensor]:
"""
:param df: "*_features" dataframe.
:param device: The device to use for training (e.g. 'cpu', 'cuda').
"""
@abstractmethod
def convert_y(self, df: pd.DataFrame, device: Optional[str] = None) -> List[torch.Tensor]:
"""
:param df: "*_labels" dataframe.
:param device: The device to use for training (e.g. 'cpu', 'cuda').
"""
class DefaultPyTorchDataConvertor(PyTorchDataConvertor):
"""
A default converter that preserves the shape of the features dataframe.
"""
def __init__(
self,
target_tensor_type: Optional[torch.dtype] = None,
squeeze_target_tensor: bool = False
):
"""
:param target_tensor_type: type of target tensor, for classification use
torch.long, for regressor use torch.float or torch.double.
:param squeeze_target_tensor: controls the target shape; used for loss functions
that require 0D or 1D targets.
"""
self._target_tensor_type = target_tensor_type
self._squeeze_target_tensor = squeeze_target_tensor
def convert_x(self, df: pd.DataFrame, device: Optional[str] = None) -> List[torch.Tensor]:
x = torch.from_numpy(df.values).float()
if device:
x = x.to(device)
return [x]
def convert_y(self, df: pd.DataFrame, device: Optional[str] = None) -> List[torch.Tensor]:
y = torch.from_numpy(df.values)
if self._target_tensor_type:
y = y.to(self._target_tensor_type)
if self._squeeze_target_tensor:
y = y.squeeze()
if device:
y = y.to(device)
return [y]
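
A quick round-trip with the default convertor (editor's sketch; the toy frames and column names are illustrative only):

import pandas as pd
import torch

conv = DefaultPyTorchDataConvertor(target_tensor_type=torch.long, squeeze_target_tensor=True)
features = pd.DataFrame({"f0": [0.1, 0.2, 0.3], "f1": [1.0, 0.5, 0.25]})
labels = pd.DataFrame({"&-class": [0, 1, 0]})

(x,) = conv.convert_x(features)  # float32 tensor, shape (3, 2)
(y,) = conv.convert_y(labels)    # long tensor, squeezed to shape (3,)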

View File

@@ -0,0 +1,97 @@
import logging
from typing import List
import torch
from torch import nn
logger = logging.getLogger(__name__)
class PyTorchMLPModel(nn.Module):
"""
A multi-layer perceptron (MLP) model implemented using PyTorch.
This class mainly serves as a simple example of integrating PyTorch models
into freqai. It is not optimized at all and should not be used for production purposes.
:param input_dim: The number of input features. This parameter specifies the number
of features in the input data that the MLP will use to make predictions.
:param output_dim: The number of output classes. This parameter specifies the number
of classes that the MLP will predict.
:param hidden_dim: The number of hidden units in each layer. This parameter controls
the complexity of the MLP and determines how many nonlinear relationships the MLP
can represent. Increasing the number of hidden units can increase the capacity of
the MLP to model complex patterns, but it also increases the risk of overfitting
the training data. Default: 256
:param dropout_percent: The dropout rate for regularization. This parameter specifies
the probability of dropping out a neuron during training to prevent overfitting.
The dropout rate should be tuned carefully to balance between underfitting and
overfitting. Default: 0.2
:param n_layer: The number of layers in the MLP. This parameter specifies the number
of layers in the MLP architecture. Adding more layers to the MLP can increase its
capacity to model complex patterns, but it also increases the risk of overfitting
the training data. Default: 1
:returns: The output of the MLP, with shape (batch_size, output_dim)
"""
def __init__(self, input_dim: int, output_dim: int, **kwargs):
super().__init__()
hidden_dim: int = kwargs.get("hidden_dim", 256)
dropout_percent: int = kwargs.get("dropout_percent", 0.2)
n_layer: int = kwargs.get("n_layer", 1)
self.input_layer = nn.Linear(input_dim, hidden_dim)
self.blocks = nn.Sequential(*[Block(hidden_dim, dropout_percent) for _ in range(n_layer)])
self.output_layer = nn.Linear(hidden_dim, output_dim)
self.relu = nn.ReLU()
self.dropout = nn.Dropout(p=dropout_percent)
def forward(self, tensors: List[torch.Tensor]) -> torch.Tensor:
x: torch.Tensor = tensors[0]
x = self.relu(self.input_layer(x))
x = self.dropout(x)
x = self.blocks(x)
x = self.output_layer(x)
return x
class Block(nn.Module):
"""
A building block for a multi-layer perceptron (MLP).
:param hidden_dim: The number of hidden units in the feedforward network.
:param dropout_percent: The dropout rate for regularization.
:returns: torch.Tensor with shape (batch_size, hidden_dim)
"""
def __init__(self, hidden_dim: int, dropout_percent: float):
super().__init__()
self.ff = FeedForward(hidden_dim)
self.dropout = nn.Dropout(p=dropout_percent)
self.ln = nn.LayerNorm(hidden_dim)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.ff(self.ln(x))
x = self.dropout(x)
return x
class FeedForward(nn.Module):
"""
A simple fully-connected feedforward neural network block.
:param hidden_dim: The number of hidden units in the block.
:returns: torch.Tensor with shape (batch_size, hidden_dim)
"""
def __init__(self, hidden_dim: int):
super().__init__()
self.net = nn.Sequential(
nn.Linear(hidden_dim, hidden_dim),
nn.ReLU(),
)
def forward(self, x: torch.Tensor) -> torch.Tensor:
return self.net(x)
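
A smoke test of the module above (editor's sketch; dimensions are arbitrary). Note that forward() takes a list of tensors, matching what convert_x returns:

import torch

model = PyTorchMLPModel(input_dim=10, output_dim=3, hidden_dim=64, n_layer=2)
batch = torch.randn(4, 10)     # (batch_size, input_dim)
logits = model([batch])        # forward expects a List[torch.Tensor]
assert logits.shape == (4, 3)  # (batch_size, output_dim)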

View File

@@ -0,0 +1,208 @@
import logging
import math
from pathlib import Path
from typing import Any, Dict, List, Optional
import pandas as pd
import torch
from torch import nn
from torch.optim import Optimizer
from torch.utils.data import DataLoader, TensorDataset
from freqtrade.freqai.torch.PyTorchDataConvertor import PyTorchDataConvertor
from freqtrade.freqai.torch.PyTorchTrainerInterface import PyTorchTrainerInterface
logger = logging.getLogger(__name__)
class PyTorchModelTrainer(PyTorchTrainerInterface):
def __init__(
self,
model: nn.Module,
optimizer: Optimizer,
criterion: nn.Module,
device: str,
init_model: Dict,
data_convertor: PyTorchDataConvertor,
model_meta_data: Dict[str, Any] = {},
**kwargs
):
"""
:param model: The PyTorch model to be trained.
:param optimizer: The optimizer to use for training.
:param criterion: The loss function to use for training.
:param device: The device to use for training (e.g. 'cpu', 'cuda').
:param init_model: A dictionary containing the initial model/optimizer
state_dict and model_meta_data saved by self.save() method.
:param model_meta_data: Additional metadata about the model (optional).
:param data_convertor: converter from pd.DataFrame to torch.Tensor.
:param max_iters: The number of training iterations to run.
Iteration here refers to the number of times we call
self.optimizer.step(); used to calculate n_epochs.
:param batch_size: The size of the batches to use during training.
:param max_n_eval_batches: The maximum number of batches to use for evaluation.
"""
self.model = model
self.optimizer = optimizer
self.criterion = criterion
self.model_meta_data = model_meta_data
self.device = device
self.max_iters: int = kwargs.get("max_iters", 100)
self.batch_size: int = kwargs.get("batch_size", 64)
self.max_n_eval_batches: Optional[int] = kwargs.get("max_n_eval_batches", None)
self.data_convertor = data_convertor
if init_model:
self.load_from_checkpoint(init_model)
def fit(self, data_dictionary: Dict[str, pd.DataFrame], splits: List[str]):
"""
:param data_dictionary: the dictionary constructed by DataHandler to hold
all the training and test data/labels.
:param splits: splits to use in training; must contain "train". An optional
"test" split can be added by setting freqai.data_split_parameters.test_size > 0
in the config file.
For each batch, the training loop:
- Calculates the predicted output for the batch using the PyTorch model.
- Calculates the loss between the predicted and actual output using a loss function.
- Computes the gradients of the loss with respect to the model's parameters using
backpropagation.
- Updates the model's parameters using an optimizer.
"""
data_loaders_dictionary = self.create_data_loaders_dictionary(data_dictionary, splits)
epochs = self.calc_n_epochs(
n_obs=len(data_dictionary["train_features"]),
batch_size=self.batch_size,
n_iters=self.max_iters
)
for epoch in range(1, epochs + 1):
# training
losses = []
for i, batch_data in enumerate(data_loaders_dictionary["train"]):
batch_data = [t.to(self.device) for t in batch_data]  # .to() returns a copy; reassign so the move takes effect
xb = batch_data[:-1]
yb = batch_data[-1]
yb_pred = self.model(xb)
loss = self.criterion(yb_pred, yb)
self.optimizer.zero_grad(set_to_none=True)
loss.backward()
self.optimizer.step()
losses.append(loss.item())
train_loss = sum(losses) / len(losses)
log_message = f"epoch {epoch}/{epochs}: train loss {train_loss:.4f}"
# evaluation
if "test" in splits:
test_loss = self.estimate_loss(
data_loaders_dictionary,
self.max_n_eval_batches,
"test"
)
log_message += f" ; test loss {test_loss:.4f}"
logger.info(log_message)
@torch.no_grad()
def estimate_loss(
self,
data_loader_dictionary: Dict[str, DataLoader],
max_n_eval_batches: Optional[int],
split: str,
) -> float:
self.model.eval()
n_batches = 0
losses = []
for i, batch_data in enumerate(data_loader_dictionary[split]):
if max_n_eval_batches and i > max_n_eval_batches:
n_batches += 1
break
batch_data = [t.to(self.device) for t in batch_data]  # .to() returns a copy; reassign so the move takes effect
xb = batch_data[:-1]
yb = batch_data[-1]
yb_pred = self.model(xb)
loss = self.criterion(yb_pred, yb)
losses.append(loss.item())
self.model.train()
return sum(losses) / len(losses)
def create_data_loaders_dictionary(
self,
data_dictionary: Dict[str, pd.DataFrame],
splits: List[str]
) -> Dict[str, DataLoader]:
"""
Converts the input data to PyTorch tensors using a data loader.
"""
data_loader_dictionary = {}
for split in splits:
x = self.data_convertor.convert_x(data_dictionary[f"{split}_features"])
y = self.data_convertor.convert_y(data_dictionary[f"{split}_labels"])
dataset = TensorDataset(*x, *y)
data_loader = DataLoader(
dataset,
batch_size=self.batch_size,
shuffle=True,
drop_last=True,
num_workers=0,
)
data_loader_dictionary[split] = data_loader
return data_loader_dictionary
@staticmethod
def calc_n_epochs(n_obs: int, batch_size: int, n_iters: int) -> int:
"""
Calculates the number of epochs required to reach the maximum number
of iterations specified in the model training parameters.
The motivation is that `max_iters` is easier to optimize and keep stable
across different values of n_obs - the number of data points.
"""
n_batches = math.ceil(n_obs // batch_size)
epochs = math.ceil(n_iters // n_batches)
if epochs <= 10:
logger.warning("User set `max_iters` in such a way that the trainer will only perform "
f" {epochs} epochs. Please consider increasing this value accordingly")
if epochs <= 1:
logger.warning("Epochs set to 1. Please review your `max_iters` value")
epochs = 1
return epochs
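# Worked example (editor's note, not part of the commit): with n_obs=1000,
# batch_size=64 and max_iters=100, n_batches = 1000 // 64 = 15 and
# epochs = ceil(100 // 15) = 6, i.e. roughly 90 optimizer steps in total.
# Since `//` already floors, the math.ceil wrappers are effectively no-ops;
# flooring the batch count matches the DataLoader's drop_last=True behaviour.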
def save(self, path: Path):
"""
- Saving any nn.Module state_dict
- Saving model_meta_data, this dict should contain any additional data that the
user needs to store. e.g class_names for classification models.
"""
torch.save({
"model_state_dict": self.model.state_dict(),
"optimizer_state_dict": self.optimizer.state_dict(),
"model_meta_data": self.model_meta_data,
"pytrainer": self
}, path)
def load(self, path: Path):
checkpoint = torch.load(path)
return self.load_from_checkpoint(checkpoint)
def load_from_checkpoint(self, checkpoint: Dict):
"""
When using continual_learning, DataDrawer will load the dictionary
(containing state dicts and model_meta_data) by calling torch.load(path).
This dict can be accessed from any class that inherits IFreqaiModel by
calling the get_init_model() method.
"""
self.model.load_state_dict(checkpoint["model_state_dict"])
self.optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
self.model_meta_data = checkpoint["model_meta_data"]
return self
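
A checkpoint round-trip over the two methods above (a sketch; `trainer` stands for an already-fitted PyTorchModelTrainer and the path is a placeholder):

import torch
from pathlib import Path

ckpt_path = Path("/tmp/example_model.zip")  # placeholder location
trainer.save(ckpt_path)                     # writes the dict shown in save() above

checkpoint = torch.load(ckpt_path)          # the same round-trip FreqaiDataDrawer.load_data performs
restored = checkpoint["pytrainer"].load_from_checkpoint(checkpoint)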

View File

@@ -0,0 +1,53 @@
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Dict, List
import pandas as pd
import torch
from torch import nn
class PyTorchTrainerInterface(ABC):
@abstractmethod
def fit(self, data_dictionary: Dict[str, pd.DataFrame], splits: List[str]) -> None:
"""
:param data_dictionary: the dictionary constructed by DataHandler to hold
all the training and test data/labels.
:param splits: splits to use in training; must contain "train". An optional
"test" split can be added by setting freqai.data_split_parameters.test_size > 0
in the config file.
For each batch, the training loop:
- Calculates the predicted output for the batch using the PyTorch model.
- Calculates the loss between the predicted and actual output using a loss function.
- Computes the gradients of the loss with respect to the model's parameters using
backpropagation.
- Updates the model's parameters using an optimizer.
"""
@abstractmethod
def save(self, path: Path) -> None:
"""
- Saving any nn.Module state_dict
- Saving model_meta_data, this dict should contain any additional data that the
user needs to store. e.g class_names for classification models.
"""
def load(self, path: Path) -> nn.Module:
"""
:param path: path to zip file.
:returns: pytorch model.
"""
checkpoint = torch.load(path)
return self.load_from_checkpoint(checkpoint)
@abstractmethod
def load_from_checkpoint(self, checkpoint: Dict) -> nn.Module:
"""
When using continual_learning, DataDrawer will load the dictionary
(containing state dicts and model_meta_data) by calling torch.load(path).
This dict can be accessed from any class that inherits IFreqaiModel by
calling the get_init_model() method.
:param checkpoint: dict containing the model & optimizer state dicts,
model_meta_data, etc.
"""

View File


@@ -26,6 +26,7 @@ from freqtrade.exchange import (ROUND_DOWN, ROUND_UP, timeframe_to_minutes, time
 from freqtrade.misc import safe_value_fallback, safe_value_fallback2
 from freqtrade.mixins import LoggingMixin
 from freqtrade.persistence import Order, PairLocks, Trade, init_db
+from freqtrade.persistence.key_value_store import set_startup_time
 from freqtrade.plugins.pairlistmanager import PairListManager
 from freqtrade.plugins.protectionmanager import ProtectionManager
 from freqtrade.resolvers import ExchangeResolver, StrategyResolver

@@ -182,6 +183,7 @@ class FreqtradeBot(LoggingMixin):
         performs startup tasks
         """
         migrate_binance_futures_names(self.config)
+        set_startup_time()

         self.rpc.startup_messages(self.config, self.pairlists, self.protections)
         # Update older trades with precision and precision mode

View File

@@ -1,11 +1,24 @@
 import logging
+import sys
 from logging import Formatter
-from logging.handlers import RotatingFileHandler, SysLogHandler
+from logging.handlers import BufferingHandler, RotatingFileHandler, SysLogHandler

 from freqtrade.constants import Config
 from freqtrade.exceptions import OperationalException
-from freqtrade.loggers.buffering_handler import FTBufferingHandler
-from freqtrade.loggers.std_err_stream_handler import FTStdErrStreamHandler


+class FTBufferingHandler(BufferingHandler):
+    def flush(self):
+        """
+        Override Flush behaviour - we keep half of the configured capacity
+        otherwise, we have moments with "empty" logs.
+        """
+        self.acquire()
+        try:
+            # Keep half of the records in buffer.
+            self.buffer = self.buffer[-int(self.capacity / 2):]
+        finally:
+            self.release()


 logger = logging.getLogger(__name__)

@@ -56,7 +69,7 @@ def setup_logging_pre() -> None:
     logging.basicConfig(
         level=logging.INFO,
         format=LOGFORMAT,
-        handlers=[FTStdErrStreamHandler(), bufferHandler]
+        handlers=[logging.StreamHandler(sys.stderr), bufferHandler]
     )
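
Editor's sketch of the buffering behaviour above (capacity value arbitrary; uses the FTBufferingHandler from this diff). This handler backs the in-memory log read through bufferHandler, so flush() keeps the newest half instead of clearing:

import logging

handler = FTBufferingHandler(capacity=4)  # flush() triggers once 4 records are buffered
log = logging.getLogger("demo")
log.setLevel(logging.INFO)
log.addHandler(handler)

for i in range(5):
    log.info("message %d", i)
# At the automatic flush the buffer is cut to its newest half (2 records)
# rather than emptied, so recent history stays available.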

View File

@@ -1,15 +0,0 @@
from logging.handlers import BufferingHandler
class FTBufferingHandler(BufferingHandler):
def flush(self):
"""
Override Flush behaviour - we keep half of the configured capacity
otherwise, we have moments with "empty" logs.
"""
self.acquire()
try:
# Keep half of the records in buffer.
self.buffer = self.buffer[-int(self.capacity / 2):]
finally:
self.release()

View File

@@ -1,26 +0,0 @@
import sys
from logging import Handler
class FTStdErrStreamHandler(Handler):
def flush(self):
"""
Override Flush behaviour - we keep half of the configured capacity
otherwise, we have moments with "empty" logs.
"""
self.acquire()
try:
sys.stderr.flush()
finally:
self.release()
def emit(self, record):
try:
msg = self.format(record)
# Don't keep a reference to stderr - this can be problematic with progressbars.
sys.stderr.write(msg + '\n')
self.flush()
except RecursionError:
raise
except Exception:
self.handleError(record)

View File

@@ -13,13 +13,13 @@ from math import ceil
 from pathlib import Path
 from typing import Any, Dict, List, Optional, Tuple

+import progressbar
 import rapidjson
+from colorama import Fore, Style
 from colorama import init as colorama_init
 from joblib import Parallel, cpu_count, delayed, dump, load, wrap_non_picklable_objects
 from joblib.externals import cloudpickle
 from pandas import DataFrame
-from rich.progress import (BarColumn, MofNCompleteColumn, Progress, TaskProgressColumn, TextColumn,
-                           TimeElapsedColumn, TimeRemainingColumn)

 from freqtrade.constants import DATETIME_PRINT_FORMAT, FTHYPT_FILEVERSION, LAST_BT_RESULT_FN, Config
 from freqtrade.data.converter import trim_dataframes

@@ -44,6 +44,8 @@ with warnings.catch_warnings():
     from skopt import Optimizer
     from skopt.space import Dimension

+progressbar.streams.wrap_stderr()
+progressbar.streams.wrap_stdout()

 logger = logging.getLogger(__name__)

@@ -518,6 +520,29 @@ class Hyperopt:
         else:
             return self.opt.ask(n_points=n_points), [False for _ in range(n_points)]

+    def get_progressbar_widgets(self):
+        if self.print_colorized:
+            widgets = [
+                ' [Epoch ', progressbar.Counter(), ' of ', str(self.total_epochs),
+                ' (', progressbar.Percentage(), ')] ',
+                progressbar.Bar(marker=progressbar.AnimatedMarker(
+                    fill='\N{FULL BLOCK}',
+                    fill_wrap=Fore.GREEN + '{}' + Fore.RESET,
+                    marker_wrap=Style.BRIGHT + '{}' + Style.RESET_ALL,
+                )),
+                ' [', progressbar.ETA(), ', ', progressbar.Timer(), ']',
+            ]
+        else:
+            widgets = [
+                ' [Epoch ', progressbar.Counter(), ' of ', str(self.total_epochs),
+                ' (', progressbar.Percentage(), ')] ',
+                progressbar.Bar(marker=progressbar.AnimatedMarker(
+                    fill='\N{FULL BLOCK}',
+                )),
+                ' [', progressbar.ETA(), ', ', progressbar.Timer(), ']',
+            ]
+        return widgets
+
     def evaluate_result(self, val: Dict[str, Any], current: int, is_random: bool):
         """
         Evaluate results returned from generate_optimizer

@@ -577,19 +602,11 @@ class Hyperopt:
         logger.info(f'Effective number of parallel workers used: {jobs}')

         # Define progressbar
-        with Progress(
-            TextColumn("[progress.description]{task.description}"),
-            BarColumn(bar_width=None),
-            MofNCompleteColumn(),
-            TaskProgressColumn(),
-            "",
-            TimeElapsedColumn(),
-            "",
-            TimeRemainingColumn(),
-            expand=True,
-        ) as pbar:
-            task = pbar.add_task("Epochs", total=self.total_epochs)
+        widgets = self.get_progressbar_widgets()
+        with progressbar.ProgressBar(
+            max_value=self.total_epochs, redirect_stdout=False, redirect_stderr=False,
+            widgets=widgets
+        ) as pbar:

             start = 0
             if self.analyze_per_epoch:

@@ -599,7 +616,7 @@ class Hyperopt:
                 f_val0 = self.generate_optimizer(asked[0])
                 self.opt.tell(asked, [f_val0['loss']])
                 self.evaluate_result(f_val0, 1, is_random[0])
-                pbar.update(task, advance=1)
+                pbar.update(1)
                 start += 1

             evals = ceil((self.total_epochs - start) / jobs)

@@ -613,12 +630,14 @@ class Hyperopt:
                     f_val = self.run_optimizer_parallel(parallel, asked)
                     self.opt.tell(asked, [v['loss'] for v in f_val])

+                    # Calculate progressbar outputs
                     for j, val in enumerate(f_val):
                         # Use human-friendly indexes here (starting from 1)
                         current = i * jobs + j + 1 + start
                         self.evaluate_result(val, current, is_random[j])
-                        pbar.update(task, advance=1)
+
+                        pbar.update(current)

         except KeyboardInterrupt:
             print('User interrupted..')

View File

@@ -1,5 +1,6 @@
 # flake8: noqa: F401
+from freqtrade.persistence.key_value_store import KeyStoreKeys, KeyValueStore
 from freqtrade.persistence.models import init_db
 from freqtrade.persistence.pairlock_middleware import PairLocks
 from freqtrade.persistence.trade_model import LocalTrade, Order, Trade

View File

@@ -0,0 +1,179 @@
from datetime import datetime, timezone
from enum import Enum
from typing import ClassVar, Optional, Union
from sqlalchemy import String
from sqlalchemy.orm import Mapped, mapped_column
from freqtrade.persistence.base import ModelBase, SessionType
ValueTypes = Union[str, datetime, float, int]
class ValueTypesEnum(str, Enum):
STRING = 'str'
DATETIME = 'datetime'
FLOAT = 'float'
INT = 'int'
class KeyStoreKeys(str, Enum):
BOT_START_TIME = 'bot_start_time'
STARTUP_TIME = 'startup_time'
class _KeyValueStoreModel(ModelBase):
"""
KeyValueStore database model.
"""
__tablename__ = 'KeyValueStore'
session: ClassVar[SessionType]
id: Mapped[int] = mapped_column(primary_key=True)
key: Mapped[KeyStoreKeys] = mapped_column(String(25), nullable=False, index=True)
value_type: Mapped[ValueTypesEnum] = mapped_column(String(20), nullable=False)
string_value: Mapped[Optional[str]]
datetime_value: Mapped[Optional[datetime]]
float_value: Mapped[Optional[float]]
int_value: Mapped[Optional[int]]
class KeyValueStore():
"""
Generic bot-wide, persistent key-value store
Can be used to store generic values, e.g. very first bot startup time.
Supports the types str, datetime, float and int.
"""
@staticmethod
def store_value(key: KeyStoreKeys, value: ValueTypes) -> None:
"""
Store the given value for the given key.
:param key: Key to store the value for - can be used in get-value to retrieve the key
:param value: Value to store - can be str, datetime, float or int
"""
kv = _KeyValueStoreModel.session.query(_KeyValueStoreModel).filter(
_KeyValueStoreModel.key == key).first()
if kv is None:
kv = _KeyValueStoreModel(key=key)
if isinstance(value, str):
kv.value_type = ValueTypesEnum.STRING
kv.string_value = value
elif isinstance(value, datetime):
kv.value_type = ValueTypesEnum.DATETIME
kv.datetime_value = value
elif isinstance(value, float):
kv.value_type = ValueTypesEnum.FLOAT
kv.float_value = value
elif isinstance(value, int):
kv.value_type = ValueTypesEnum.INT
kv.int_value = value
else:
raise ValueError(f'Unknown value type {kv.value_type}')
_KeyValueStoreModel.session.add(kv)
_KeyValueStoreModel.session.commit()
@staticmethod
def delete_value(key: KeyStoreKeys) -> None:
"""
Delete the value for the given key.
:param key: Key to delete the value for
"""
kv = _KeyValueStoreModel.session.query(_KeyValueStoreModel).filter(
_KeyValueStoreModel.key == key).first()
if kv is not None:
_KeyValueStoreModel.session.delete(kv)
_KeyValueStoreModel.session.commit()
@staticmethod
def get_value(key: KeyStoreKeys) -> Optional[ValueTypes]:
"""
Get the value for the given key.
:param key: Key to get the value for
"""
kv = _KeyValueStoreModel.session.query(_KeyValueStoreModel).filter(
_KeyValueStoreModel.key == key).first()
if kv is None:
return None
if kv.value_type == ValueTypesEnum.STRING:
return kv.string_value
if kv.value_type == ValueTypesEnum.DATETIME and kv.datetime_value is not None:
return kv.datetime_value.replace(tzinfo=timezone.utc)
if kv.value_type == ValueTypesEnum.FLOAT:
return kv.float_value
if kv.value_type == ValueTypesEnum.INT:
return kv.int_value
# This should never happen unless someone messed with the database manually
raise ValueError(f'Unknown value type {kv.value_type}') # pragma: no cover
@staticmethod
def get_string_value(key: KeyStoreKeys) -> Optional[str]:
"""
Get the value for the given key.
:param key: Key to get the value for
"""
kv = _KeyValueStoreModel.session.query(_KeyValueStoreModel).filter(
_KeyValueStoreModel.key == key,
_KeyValueStoreModel.value_type == ValueTypesEnum.STRING).first()
if kv is None:
return None
return kv.string_value
@staticmethod
def get_datetime_value(key: KeyStoreKeys) -> Optional[datetime]:
"""
Get the value for the given key.
:param key: Key to get the value for
"""
kv = _KeyValueStoreModel.session.query(_KeyValueStoreModel).filter(
_KeyValueStoreModel.key == key,
_KeyValueStoreModel.value_type == ValueTypesEnum.DATETIME).first()
if kv is None or kv.datetime_value is None:
return None
return kv.datetime_value.replace(tzinfo=timezone.utc)
@staticmethod
def get_float_value(key: KeyStoreKeys) -> Optional[float]:
"""
Get the value for the given key.
:param key: Key to get the value for
"""
kv = _KeyValueStoreModel.session.query(_KeyValueStoreModel).filter(
_KeyValueStoreModel.key == key,
_KeyValueStoreModel.value_type == ValueTypesEnum.FLOAT).first()
if kv is None:
return None
return kv.float_value
@staticmethod
def get_int_value(key: KeyStoreKeys) -> Optional[int]:
"""
Get the value for the given key.
:param key: Key to get the value for
"""
kv = _KeyValueStoreModel.session.query(_KeyValueStoreModel).filter(
_KeyValueStoreModel.key == key,
_KeyValueStoreModel.value_type == ValueTypesEnum.INT).first()
if kv is None:
return None
return kv.int_value
def set_startup_time():
"""
Sets bot_start_time to the open date of the first trade - or to "now" on new databases.
Sets startup_time to "now".
"""
st = KeyValueStore.get_value('bot_start_time')
if st is None:
from freqtrade.persistence import Trade
t = Trade.session.query(Trade).order_by(Trade.open_date.asc()).first()
if t is not None:
KeyValueStore.store_value('bot_start_time', t.open_date_utc)
else:
KeyValueStore.store_value('bot_start_time', datetime.now(timezone.utc))
KeyValueStore.store_value('startup_time', datetime.now(timezone.utc))
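
A usage sketch for the store (editor's illustration; assumes init_db() has already bound the session, and keys come from the KeyStoreKeys enum above):

from datetime import datetime, timezone

# Store and read back a datetime. get_datetime_value() re-attaches UTC on
# the way out, so the returned value is timezone-aware.
KeyValueStore.store_value(KeyStoreKeys.BOT_START_TIME, datetime.now(timezone.utc))
started = KeyValueStore.get_datetime_value(KeyStoreKeys.BOT_START_TIME)

# Typed getters return None when the key is missing or holds another type.
missing = KeyValueStore.get_int_value(KeyStoreKeys.STARTUP_TIME)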

View File

@@ -13,6 +13,7 @@ from sqlalchemy.pool import StaticPool
 from freqtrade.exceptions import OperationalException
 from freqtrade.persistence.base import ModelBase
+from freqtrade.persistence.key_value_store import _KeyValueStoreModel
 from freqtrade.persistence.migrations import check_migrate
 from freqtrade.persistence.pairlock import PairLock
 from freqtrade.persistence.trade_model import Order, Trade

@@ -76,6 +77,7 @@ def init_db(db_url: str) -> None:
         bind=engine, autoflush=False), scopefunc=get_request_or_thread_id)
     Order.session = Trade.session
     PairLock.session = Trade.session
+    _KeyValueStoreModel.session = Trade.session

     previous_tables = inspect(engine).get_table_names()
     ModelBase.metadata.create_all(engine)

View File

@@ -9,10 +9,10 @@ from typing import Any, ClassVar, Dict, List, Optional, Sequence, cast
 from sqlalchemy import (Enum, Float, ForeignKey, Integer, ScalarResult, Select, String,
                         UniqueConstraint, desc, func, select)
-from sqlalchemy.orm import Mapped, lazyload, mapped_column, relationship
+from sqlalchemy.orm import Mapped, lazyload, mapped_column, relationship, validates

-from freqtrade.constants import (DATETIME_PRINT_FORMAT, MATH_CLOSE_PREC, NON_OPEN_EXCHANGE_STATES,
-                                 BuySell, LongShort)
+from freqtrade.constants import (CUSTOM_TAG_MAX_LENGTH, DATETIME_PRINT_FORMAT, MATH_CLOSE_PREC,
+                                 NON_OPEN_EXCHANGE_STATES, BuySell, LongShort)
 from freqtrade.enums import ExitType, TradingMode
 from freqtrade.exceptions import DependencyException, OperationalException
 from freqtrade.exchange import (ROUND_DOWN, ROUND_UP, amount_to_contract_precision,

@@ -1259,11 +1259,13 @@ class Trade(ModelBase, LocalTrade):
         Float(), nullable=True, default=0.0)  # type: ignore
     # Lowest price reached
     min_rate: Mapped[Optional[float]] = mapped_column(Float(), nullable=True)  # type: ignore
-    exit_reason: Mapped[Optional[str]] = mapped_column(String(100), nullable=True)  # type: ignore
+    exit_reason: Mapped[Optional[str]] = mapped_column(
+        String(CUSTOM_TAG_MAX_LENGTH), nullable=True)  # type: ignore
     exit_order_status: Mapped[Optional[str]] = mapped_column(
         String(100), nullable=True)  # type: ignore
     strategy: Mapped[Optional[str]] = mapped_column(String(100), nullable=True)  # type: ignore
-    enter_tag: Mapped[Optional[str]] = mapped_column(String(100), nullable=True)  # type: ignore
+    enter_tag: Mapped[Optional[str]] = mapped_column(
+        String(CUSTOM_TAG_MAX_LENGTH), nullable=True)  # type: ignore
     timeframe: Mapped[Optional[int]] = mapped_column(Integer, nullable=True)  # type: ignore

     trading_mode: Mapped[TradingMode] = mapped_column(

@@ -1293,6 +1295,13 @@ class Trade(ModelBase, LocalTrade):
         self.realized_profit = 0
         self.recalc_open_trade_value()

+    @validates('enter_tag', 'exit_reason')
+    def validate_string_len(self, key, value):
+        max_len = getattr(self.__class__, key).prop.columns[0].type.length
+        if value and len(value) > max_len:
+            return value[:max_len]
+        return value
+
     def delete(self) -> None:
         for order in self.orders:

View File
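The `@validates('enter_tag', 'exit_reason')` hook added above truncates over-long tags to the column's declared length instead of letting the database error out. A self-contained sketch of the same pattern on a throwaway model (the `Demo` model and `TAG_MAX_LENGTH` are illustrative stand-ins, not freqtrade code):

```python
from sqlalchemy import String, create_engine, select
from sqlalchemy.orm import (DeclarativeBase, Mapped, Session, mapped_column,
                            validates)

TAG_MAX_LENGTH = 255  # stand-in for CUSTOM_TAG_MAX_LENGTH


class Base(DeclarativeBase):
    pass


class Demo(Base):
    __tablename__ = 'demo'
    id: Mapped[int] = mapped_column(primary_key=True)
    enter_tag: Mapped[str] = mapped_column(String(TAG_MAX_LENGTH))

    @validates('enter_tag')
    def validate_string_len(self, key, value):
        # Read the declared column length and truncate oversized input.
        max_len = getattr(self.__class__, key).prop.columns[0].type.length
        if value and len(value) > max_len:
            return value[:max_len]
        return value


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Demo(enter_tag='x' * 1000))
    session.commit()
    assert session.scalars(select(Demo)).one().enter_tag == 'x' * TAG_MAX_LENGTH
```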

@@ -108,6 +108,8 @@ class Profit(BaseModel):
     max_drawdown: float
     max_drawdown_abs: float
     trading_volume: Optional[float]
+    bot_start_timestamp: int
+    bot_start_date: str


 class SellReason(BaseModel):

View File
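A hedged sketch of the two new fields on the `Profit` response schema, written in pydantic v1 style to match the `pydantic>=1.8.0` pin later in this range (the instantiation values are illustrative defaults, mirroring the API test further down):

```python
from typing import Optional

from pydantic import BaseModel


class Profit(BaseModel):
    trading_volume: Optional[float]    # pydantic v1 defaults Optional to None
    bot_start_timestamp: int           # epoch milliseconds; 0 when unknown
    bot_start_date: str                # formatted date string; '' when unknown


print(Profit(bot_start_timestamp=0, bot_start_date='').dict())
# {'trading_volume': None, 'bot_start_timestamp': 0, 'bot_start_date': ''}
```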

@@ -26,7 +26,7 @@ from freqtrade.exceptions import ExchangeError, PricingError
 from freqtrade.exchange import timeframe_to_minutes, timeframe_to_msecs
 from freqtrade.loggers import bufferHandler
 from freqtrade.misc import decimals_per_coin, shorten_date
-from freqtrade.persistence import Order, PairLocks, Trade
+from freqtrade.persistence import KeyStoreKeys, KeyValueStore, Order, PairLocks, Trade
 from freqtrade.persistence.models import PairLock
 from freqtrade.plugins.pairlist.pairlist_helpers import expand_pairlist
 from freqtrade.rpc.fiat_convert import CryptoToFiatConverter
@@ -543,6 +543,7 @@ class RPC:
         first_date = trades[0].open_date if trades else None
         last_date = trades[-1].open_date if trades else None
         num = float(len(durations) or 1)
+        bot_start = KeyValueStore.get_datetime_value(KeyStoreKeys.BOT_START_TIME)
         return {
             'profit_closed_coin': profit_closed_coin_sum,
             'profit_closed_percent_mean': round(profit_closed_ratio_mean * 100, 2),
@@ -576,6 +577,8 @@ class RPC:
             'max_drawdown': max_drawdown,
             'max_drawdown_abs': max_drawdown_abs,
             'trading_volume': trading_volume,
+            'bot_start_timestamp': int(bot_start.timestamp() * 1000) if bot_start else 0,
+            'bot_start_date': bot_start.strftime(DATETIME_PRINT_FORMAT) if bot_start else '',
         }

     def _rpc_balance(self, stake_currency: str, fiat_display_currency: str) -> Dict:
@@ -1193,6 +1196,7 @@ class RPC:
         from freqtrade.resolvers.strategy_resolver import StrategyResolver
         strategy = StrategyResolver.load_strategy(config)
         strategy.dp = DataProvider(config, exchange=exchange, pairlists=None)
+        strategy.ft_bot_start()
         df_analyzed = strategy.analyze_ticker(_data[pair], {'pair': pair})

View File
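A small sketch of how the two new response fields are derived from the stored bot start time, assuming freqtrade's `DATETIME_PRINT_FORMAT` is `'%Y-%m-%d %H:%M:%S'` (the datetime below stands in for `KeyValueStore.get_datetime_value(KeyStoreKeys.BOT_START_TIME)`):

```python
from datetime import datetime, timezone

DATETIME_PRINT_FORMAT = '%Y-%m-%d %H:%M:%S'  # assumed value of the constant
bot_start = datetime(2023, 4, 10, 12, 0, tzinfo=timezone.utc)

payload = {
    'bot_start_timestamp': int(bot_start.timestamp() * 1000) if bot_start else 0,
    'bot_start_date': bot_start.strftime(DATETIME_PRINT_FORMAT) if bot_start else '',
}
print(payload)
# {'bot_start_timestamp': 1681128000000, 'bot_start_date': '2023-04-10 12:00:00'}
```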

@@ -819,7 +819,7 @@ class Telegram(RPCHandler):
         best_pair = stats['best_pair']
         best_pair_profit_ratio = stats['best_pair_profit_ratio']
         if stats['trade_count'] == 0:
-            markdown_msg = 'No trades yet.'
+            markdown_msg = f"No trades yet.\n*Bot started:* `{stats['bot_start_date']}`"
         else:
             # Message to display
             if stats['closed_trade_count'] > 0:
@@ -838,6 +838,7 @@ class Telegram(RPCHandler):
                 f"({profit_all_percent} \N{GREEK CAPITAL LETTER SIGMA}%)`\n"
                 f"∙ `{round_coin_value(profit_all_fiat, fiat_disp_cur)}`\n"
                 f"*Total Trade Count:* `{trade_count}`\n"
+                f"*Bot started:* `{stats['bot_start_date']}`\n"
                 f"*{'First Trade opened' if not timescale else 'Showing Profit since'}:* "
                 f"`{first_trade_date}`\n"
                 f"*Latest Trade opened:* `{latest_trade_date}`\n"

View File

@@ -10,7 +10,7 @@ from typing import Dict, List, Optional, Tuple, Union
 import arrow
 from pandas import DataFrame

-from freqtrade.constants import Config, IntOrInf, ListPairsWithTimeframes
+from freqtrade.constants import CUSTOM_TAG_MAX_LENGTH, Config, IntOrInf, ListPairsWithTimeframes
 from freqtrade.data.dataprovider import DataProvider
 from freqtrade.enums import (CandleType, ExitCheckTuple, ExitType, MarketDirection, RunMode,
                              SignalDirection, SignalTagType, SignalType, TradingMode)
@@ -27,7 +27,6 @@ from freqtrade.wallets import Wallets

 logger = logging.getLogger(__name__)

-CUSTOM_EXIT_MAX_LENGTH = 64

 class IStrategy(ABC, HyperStrategyMixin):
@@ -1118,11 +1117,11 @@ class IStrategy(ABC, HyperStrategyMixin):
                 exit_signal = ExitType.CUSTOM_EXIT
                 if isinstance(reason_cust, str):
                     custom_reason = reason_cust
-                    if len(reason_cust) > CUSTOM_EXIT_MAX_LENGTH:
+                    if len(reason_cust) > CUSTOM_TAG_MAX_LENGTH:
                         logger.warning(f'Custom exit reason returned from '
                                        f'custom_exit is too long and was trimmed'
-                                       f'to {CUSTOM_EXIT_MAX_LENGTH} characters.')
-                        custom_reason = reason_cust[:CUSTOM_EXIT_MAX_LENGTH]
+                                       f'to {CUSTOM_TAG_MAX_LENGTH} characters.')
+                        custom_reason = reason_cust[:CUSTOM_TAG_MAX_LENGTH]
                 else:
                     custom_reason = ''
             if (

View File
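A standalone sketch of the trimming rule above, assuming `CUSTOM_TAG_MAX_LENGTH` is 255 (the value implied by the "Bump tag length to 255" commit in this range):

```python
CUSTOM_TAG_MAX_LENGTH = 255  # assumed; imported from freqtrade.constants in real code

reason_cust = 'roi_target_' + 'x' * 300   # what custom_exit() returned
custom_reason = reason_cust
if len(reason_cust) > CUSTOM_TAG_MAX_LENGTH:
    # freqtrade logs a warning at this point, then trims the reason
    custom_reason = reason_cust[:CUSTOM_TAG_MAX_LENGTH]
assert len(custom_reason) == CUSTOM_TAG_MAX_LENGTH
```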

@@ -223,6 +223,7 @@ class FreqaiExampleHybridStrategy(IStrategy):
         :param metadata: metadata of current pair
         usage example: dataframe["&-target"] = dataframe["close"].shift(-1) / dataframe["close"]
         """
+        self.freqai.class_names = ["down", "up"]
         dataframe['&s-up_or_down'] = np.where(dataframe["close"].shift(-50) >
                                               dataframe["close"], 'up', 'down')

View File
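The added `class_names` line tells the FreqAI classifier which labels to expect; they have to match the labels produced by the target column. A hedged standalone illustration of that target construction (synthetic data, not the example strategy itself):

```python
import numpy as np
import pandas as pd

# Label each candle by whether the close 50 candles ahead is higher or lower.
df = pd.DataFrame({'close': np.linspace(1.0, 2.0, 100)})
df['&s-up_or_down'] = np.where(df['close'].shift(-50) > df['close'], 'up', 'down')
print(df['&s-up_or_down'].value_counts())  # labels match class_names = ["down", "up"]
```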

@@ -7,10 +7,10 @@
 -r docs/requirements-docs.txt

 coveralls==3.3.1
-ruff==0.0.260
-mypy==1.1.1
-pre-commit==3.2.1
-pytest==7.2.2
+ruff==0.0.261
+mypy==1.2.0
+pre-commit==3.2.2
+pytest==7.3.0
 pytest-asyncio==0.21.0
 pytest-cov==4.0.0
 pytest-mock==3.10.0
@@ -22,11 +22,11 @@ time-machine==2.9.0
 httpx==0.23.3

 # Convert jupyter notebooks to markdown documents
-nbconvert==7.2.10
+nbconvert==7.3.1

 # mypy types
 types-cachetools==5.3.0.5
 types-filelock==3.2.7
 types-requests==2.28.11.17
 types-tabulate==0.9.0.2
-types-python-dateutil==2.8.19.11
+types-python-dateutil==2.8.19.12

View File

@@ -5,4 +5,5 @@
 scipy==1.10.1
 scikit-learn==1.1.3
 scikit-optimize==0.9.0
-filelock==3.10.6
+filelock==3.11.0
+progressbar2==4.2.0

View File

@@ -1,4 +1,4 @@
 # Include all requirements to run the bot.
 -r requirements.txt

-plotly==5.14.0
+plotly==5.14.1

View File

@@ -2,10 +2,10 @@ numpy==1.24.2
 pandas==1.5.3
 pandas-ta==0.3.14b

-ccxt==3.0.50
+ccxt==3.0.59
 cryptography==40.0.1
 aiohttp==3.8.4
-SQLAlchemy==2.0.8
+SQLAlchemy==2.0.9
 python-telegram-bot==13.15
 arrow==1.2.3
 cachetools==4.2.2
@@ -20,7 +20,6 @@ jinja2==3.1.2
 tables==3.8.0
 blosc==1.11.1
 joblib==1.2.0
-rich==13.3.3
 pyarrow==11.0.0; platform_machine != 'armv7l'

 # find first, C search in arrays
@@ -29,7 +28,7 @@ py_find_1st==1.1.5

 # Load ticker files 30% faster
 python-rapidjson==1.10
 # Properly format api responses
-orjson==3.8.9
+orjson==3.8.10

 # Notify systemd
 sdnotify==0.3.2
@@ -51,10 +50,10 @@ prompt-toolkit==3.0.38
 python-dateutil==2.8.2

 #Futures
-schedule==1.1.0
+schedule==1.2.0

 #WS Messages
-websockets==11.0
+websockets==11.0.1
 janus==1.0.0

 ast-comments==1.0.1

View File

@@ -8,6 +8,7 @@ hyperopt = [
     'scikit-learn',
     'scikit-optimize>=0.7.0',
     'filelock',
+    'progressbar2',
 ]

 freqai = [
@@ -81,7 +82,6 @@ setup(
         'numpy',
         'pandas',
         'joblib>=1.2.0',
-        'rich',
         'pyarrow; platform_machine != "armv7l"',
         'fastapi',
         'pydantic>=1.8.0',

View File

@@ -85,7 +85,7 @@ function updateenv() {
     if [[ $REPLY =~ ^[Yy]$ ]]
     then
         REQUIREMENTS_FREQAI="-r requirements-freqai.txt --use-pep517"
-        read -p "Do you also want dependencies for freqai-rl (~700mb additional space required) [y/N]? "
+        read -p "Do you also want dependencies for freqai-rl or PyTorch (~700mb additional space required) [y/N]? "
         if [[ $REPLY =~ ^[Yy]$ ]]
         then
             REQUIREMENTS_FREQAI="-r requirements-freqai-rl.txt"

View File

@@ -1,5 +1,6 @@
 from copy import deepcopy
 from pathlib import Path
+from typing import Any, Dict
 from unittest.mock import MagicMock

 import pytest
@@ -85,6 +86,22 @@ def make_rl_config(conf):
     return conf


+def mock_pytorch_mlp_model_training_parameters() -> Dict[str, Any]:
+    return {
+        "learning_rate": 3e-4,
+        "trainer_kwargs": {
+            "max_iters": 1,
+            "batch_size": 64,
+            "max_n_eval_batches": 1,
+        },
+        "model_kwargs": {
+            "hidden_dim": 32,
+            "dropout_percent": 0.2,
+            "n_layer": 1,
+        }
+    }
+
+
 def get_patched_data_kitchen(mocker, freqaiconf):
     dk = FreqaiDataKitchen(freqaiconf)
     return dk

View File
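A short sketch of how the tests below consume this fixture: the returned dict is merged into the FreqAI config before training (the bare `freqai_conf` dict here is an illustrative stand-in for the real fixture):

```python
from tests.freqai.conftest import mock_pytorch_mlp_model_training_parameters

freqai_conf = {'freqai': {'model_training_parameters': {}}}  # stand-in fixture
freqai_conf['freqai']['model_training_parameters'].update(
    mock_pytorch_mlp_model_training_parameters())
assert freqai_conf['freqai']['model_training_parameters']['trainer_kwargs']['max_iters'] == 1
```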

@@ -15,7 +15,8 @@ from freqtrade.optimize.backtesting import Backtesting
 from freqtrade.persistence import Trade
 from freqtrade.plugins.pairlistmanager import PairListManager
 from tests.conftest import EXMS, create_mock_trades, get_patched_exchange, log_has_re
-from tests.freqai.conftest import get_patched_freqai_strategy, make_rl_config
+from tests.freqai.conftest import (get_patched_freqai_strategy, make_rl_config,
+                                   mock_pytorch_mlp_model_training_parameters)


 def is_py11() -> bool:
@@ -34,13 +35,14 @@ def is_mac() -> bool:

 def can_run_model(model: str) -> None:
     if (is_arm() or is_py11()) and "Catboost" in model:
-        pytest.skip("CatBoost is not supported on ARM")
+        pytest.skip("CatBoost is not supported on ARM.")

-    if is_mac() and not is_arm() and 'Reinforcement' in model:
-        pytest.skip("Reinforcement learning module not available on intel based Mac OS")
+    is_pytorch_model = 'Reinforcement' in model or 'PyTorch' in model
+    if is_pytorch_model and is_mac() and not is_arm():
+        pytest.skip("Reinforcement learning / PyTorch module not available on intel based Mac OS.")

-    if is_py11() and 'Reinforcement' in model:
-        pytest.skip("Reinforcement learning currently not available on python 3.11.")
+    if is_pytorch_model and is_py11():
+        pytest.skip("Reinforcement learning / PyTorch currently not available on python 3.11.")


 @pytest.mark.parametrize('model, pca, dbscan, float32, can_short, shuffle, buffer', [
@@ -48,11 +50,12 @@ def can_run_model(model: str) -> None:
     ('XGBoostRegressor', False, True, False, True, False, 10),
     ('XGBoostRFRegressor', False, False, False, True, False, 0),
     ('CatboostRegressor', False, False, False, True, True, 0),
+    ('PyTorchMLPRegressor', False, False, False, True, False, 0),
     ('ReinforcementLearner', False, True, False, True, False, 0),
     ('ReinforcementLearner_multiproc', False, False, False, True, False, 0),
     ('ReinforcementLearner_test_3ac', False, False, False, False, False, 0),
     ('ReinforcementLearner_test_3ac', False, False, False, True, False, 0),
-    ('ReinforcementLearner_test_4ac', False, False, False, True, False, 0)
+    ('ReinforcementLearner_test_4ac', False, False, False, True, False, 0),
 ])
 def test_extract_data_and_train_model_Standard(mocker, freqai_conf, model, pca,
                                                dbscan, float32, can_short, shuffle, buffer):
@@ -79,6 +82,11 @@ def test_extract_data_and_train_model_Standard(mocker, freqai_conf, model, pca,
         freqai_conf["freqaimodel_path"] = str(Path(__file__).parents[1] / "freqai" / "test_models")
         freqai_conf["freqai"]["rl_config"]["drop_ohlc_from_features"] = True

+    if 'PyTorchMLPRegressor' in model:
+        model_save_ext = 'zip'
+        pytorch_mlp_mtp = mock_pytorch_mlp_model_training_parameters()
+        freqai_conf['freqai']['model_training_parameters'].update(pytorch_mlp_mtp)
+
     strategy = get_patched_freqai_strategy(mocker, freqai_conf)
     exchange = get_patched_exchange(mocker, freqai_conf)
     strategy.dp = DataProvider(freqai_conf, exchange)
@@ -123,8 +131,7 @@ def test_extract_data_and_train_model_Standard(mocker, freqai_conf, model, pca,
     ('CatboostClassifierMultiTarget', "freqai_test_multimodel_classifier_strat")
 ])
 def test_extract_data_and_train_model_MultiTargets(mocker, freqai_conf, model, strat):
-    if (is_arm() or is_py11()) and 'Catboost' in model:
-        pytest.skip("CatBoost is not supported on ARM")
+    can_run_model(model)

     freqai_conf.update({"timerange": "20180110-20180130"})
     freqai_conf.update({"strategy": strat})
@@ -164,10 +171,10 @@ def test_extract_data_and_train_model_MultiTargets(mocker, freqai_conf, model, s
     'CatboostClassifier',
     'XGBoostClassifier',
     'XGBoostRFClassifier',
+    'PyTorchMLPClassifier',
 ])
 def test_extract_data_and_train_model_Classifiers(mocker, freqai_conf, model):
-    if (is_arm() or is_py11()) and model == 'CatboostClassifier':
-        pytest.skip("CatBoost is not supported on ARM")
+    can_run_model(model)

     freqai_conf.update({"freqaimodel": model})
     freqai_conf.update({"strategy": "freqai_test_classifier"})
@@ -193,7 +200,20 @@ def test_extract_data_and_train_model_Classifiers(mocker, freqai_conf, model):
     freqai.extract_data_and_train_model(new_timerange, "ADA/BTC",
                                         strategy, freqai.dk, data_load_timerange)

-    assert Path(freqai.dk.data_path / f"{freqai.dk.model_filename}_model.joblib").exists()
+    if 'PyTorchMLPClassifier' in model:
+        pytorch_mlp_mtp = mock_pytorch_mlp_model_training_parameters()
+        freqai_conf['freqai']['model_training_parameters'].update(pytorch_mlp_mtp)
+
+    if freqai.dd.model_type == 'joblib':
+        model_file_extension = ".joblib"
+    elif freqai.dd.model_type == "pytorch":
+        model_file_extension = ".zip"
+    else:
+        raise Exception(f"Unsupported model type: {freqai.dd.model_type},"
+                        f" can't assign model_file_extension")
+
+    assert Path(freqai.dk.data_path /
+                f"{freqai.dk.model_filename}_model{model_file_extension}").exists()
     assert Path(freqai.dk.data_path / f"{freqai.dk.model_filename}_metadata.json").exists()
     assert Path(freqai.dk.data_path / f"{freqai.dk.model_filename}_trained_df.pkl").exists()
     assert Path(freqai.dk.data_path / f"{freqai.dk.model_filename}_svm_model.joblib").exists()
@@ -207,10 +227,12 @@ def test_extract_data_and_train_model_Classifiers(mocker, freqai_conf, model):
         ("LightGBMRegressor", 2, "freqai_test_strat"),
         ("XGBoostRegressor", 2, "freqai_test_strat"),
         ("CatboostRegressor", 2, "freqai_test_strat"),
+        ("PyTorchMLPRegressor", 2, "freqai_test_strat"),
         ("ReinforcementLearner", 3, "freqai_rl_test_strat"),
         ("XGBoostClassifier", 2, "freqai_test_classifier"),
         ("LightGBMClassifier", 2, "freqai_test_classifier"),
-        ("CatboostClassifier", 2, "freqai_test_classifier")
+        ("CatboostClassifier", 2, "freqai_test_classifier"),
+        ("PyTorchMLPClassifier", 2, "freqai_test_classifier")
     ],
 )
 def test_start_backtesting(mocker, freqai_conf, model, num_files, strat, caplog):
@@ -231,6 +253,10 @@ def test_start_backtesting(mocker, freqai_conf, model, num_files, strat, caplog)
     if 'test_4ac' in model:
         freqai_conf["freqaimodel_path"] = str(Path(__file__).parents[1] / "freqai" / "test_models")

+    if 'PyTorchMLP' in model:
+        pytorch_mlp_mtp = mock_pytorch_mlp_model_training_parameters()
+        freqai_conf['freqai']['model_training_parameters'].update(pytorch_mlp_mtp)
+
     freqai_conf.get("freqai", {}).get("feature_parameters", {}).update(
         {"indicator_periods_candles": [2]})

View File
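A hedged restatement of the model-type dispatch the assertions above rely on: joblib-backed models keep the `.joblib` extension, while PyTorch models are saved as `.zip` archives.

```python
# Sketch only; the real lookup reads freqai.dd.model_type at runtime.
def model_file_extension(model_type: str) -> str:
    extensions = {'joblib': '.joblib', 'pytorch': '.zip'}
    if model_type not in extensions:
        raise ValueError(f"Unsupported model type: {model_type}")
    return extensions[model_type]

assert model_file_extension('pytorch') == '.zip'
```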

@@ -0,0 +1,69 @@
+from datetime import datetime, timedelta, timezone
+
+import pytest
+
+from freqtrade.persistence.key_value_store import KeyValueStore, set_startup_time
+from tests.conftest import create_mock_trades_usdt
+
+
+@pytest.mark.usefixtures("init_persistence")
+def test_key_value_store(time_machine):
+    start = datetime(2023, 1, 1, 4, tzinfo=timezone.utc)
+    time_machine.move_to(start, tick=False)
+
+    KeyValueStore.store_value("test", "testStringValue")
+    KeyValueStore.store_value("test_dt", datetime.now(timezone.utc))
+    KeyValueStore.store_value("test_float", 22.51)
+    KeyValueStore.store_value("test_int", 15)
+
+    assert KeyValueStore.get_value("test") == "testStringValue"
+    assert KeyValueStore.get_value("test") == "testStringValue"
+    assert KeyValueStore.get_string_value("test") == "testStringValue"
+    assert KeyValueStore.get_value("test_dt") == datetime.now(timezone.utc)
+    assert KeyValueStore.get_datetime_value("test_dt") == datetime.now(timezone.utc)
+    assert KeyValueStore.get_string_value("test_dt") is None
+    assert KeyValueStore.get_float_value("test_dt") is None
+    assert KeyValueStore.get_int_value("test_dt") is None
+    assert KeyValueStore.get_value("test_float") == 22.51
+    assert KeyValueStore.get_float_value("test_float") == 22.51
+    assert KeyValueStore.get_value("test_int") == 15
+    assert KeyValueStore.get_int_value("test_int") == 15
+    assert KeyValueStore.get_datetime_value("test_int") is None
+
+    time_machine.move_to(start + timedelta(days=20, hours=5), tick=False)
+    assert KeyValueStore.get_value("test_dt") != datetime.now(timezone.utc)
+    assert KeyValueStore.get_value("test_dt") == start
+
+    # Test update works
+    KeyValueStore.store_value("test_dt", datetime.now(timezone.utc))
+    assert KeyValueStore.get_value("test_dt") == datetime.now(timezone.utc)
+
+    KeyValueStore.store_value("test_float", 23.51)
+    assert KeyValueStore.get_value("test_float") == 23.51
+    # test deleting
+    KeyValueStore.delete_value("test_float")
+    assert KeyValueStore.get_value("test_float") is None
+    # Delete same value again (should not fail)
+    KeyValueStore.delete_value("test_float")
+
+    with pytest.raises(ValueError, match=r"Unknown value type"):
+        KeyValueStore.store_value("test_float", {'some': 'dict'})
+
+
+@pytest.mark.usefixtures("init_persistence")
+def test_set_startup_time(fee, time_machine):
+    create_mock_trades_usdt(fee)
+    start = datetime.now(timezone.utc)
+    time_machine.move_to(start, tick=False)
+    set_startup_time()
+
+    assert KeyValueStore.get_value("startup_time") == start
+    initial_time = KeyValueStore.get_value("bot_start_time")
+    assert initial_time <= start
+
+    # Simulate bot restart
+    new_start = start + timedelta(days=5)
+    time_machine.move_to(new_start, tick=False)
+    set_startup_time()
+
+    assert KeyValueStore.get_value("startup_time") == new_start
+    assert KeyValueStore.get_value("bot_start_time") == initial_time

View File
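For reference, a condensed usage sketch of the API this new test exercises (it assumes an initialised persistence layer, e.g. via `init_db()`):

```python
from datetime import datetime, timezone

from freqtrade.persistence.key_value_store import KeyValueStore

KeyValueStore.store_value('last_run', datetime.now(timezone.utc))
when = KeyValueStore.get_datetime_value('last_run')  # typed getter: datetime or None
KeyValueStore.delete_value('last_run')               # idempotent delete
```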

@@ -6,7 +6,7 @@ import arrow
 import pytest
 from sqlalchemy import select

-from freqtrade.constants import DATETIME_PRINT_FORMAT
+from freqtrade.constants import CUSTOM_TAG_MAX_LENGTH, DATETIME_PRINT_FORMAT
 from freqtrade.enums import TradingMode
 from freqtrade.exceptions import DependencyException
 from freqtrade.persistence import LocalTrade, Order, Trade, init_db
@@ -2037,6 +2037,7 @@ def test_Trade_object_idem():
         'get_mix_tag_performance',
         'get_trading_volume',
         'from_json',
+        'validate_string_len',
     )
     EXCLUDES2 = ('trades', 'trades_open', 'bt_trades_open_pp', 'bt_open_open_trade_count',
                  'total_profit')
@@ -2055,6 +2056,31 @@ def test_Trade_object_idem():
         assert item in trade


+@pytest.mark.usefixtures("init_persistence")
+def test_trade_truncates_string_fields():
+    trade = Trade(
+        pair='ADA/USDT',
+        stake_amount=20.0,
+        amount=30.0,
+        open_rate=2.0,
+        open_date=datetime.utcnow() - timedelta(minutes=20),
+        fee_open=0.001,
+        fee_close=0.001,
+        exchange='binance',
+        leverage=1.0,
+        trading_mode='futures',
+        enter_tag='a' * CUSTOM_TAG_MAX_LENGTH * 2,
+        exit_reason='b' * CUSTOM_TAG_MAX_LENGTH * 2,
+    )
+    Trade.session.add(trade)
+    Trade.commit()
+
+    trade1 = Trade.session.scalars(select(Trade)).first()
+
+    assert trade1.enter_tag == 'a' * CUSTOM_TAG_MAX_LENGTH
+    assert trade1.exit_reason == 'b' * CUSTOM_TAG_MAX_LENGTH
+
+
 def test_recalc_trade_from_orders(fee):
     o1_amount = 100

View File

@@ -883,6 +883,8 @@ def test_api_profit(botclient, mocker, ticker, fee, markets, is_short, expected)
         'max_drawdown': ANY,
         'max_drawdown_abs': ANY,
         'trading_volume': expected['trading_volume'],
+        'bot_start_timestamp': 0,
+        'bot_start_date': '',
     }
@@ -1403,10 +1405,10 @@ def test_api_pair_candles(botclient, ohlcv_history):
 ])
-def test_api_pair_history(botclient, ohlcv_history):
+def test_api_pair_history(botclient, mocker):
     ftbot, client = botclient
     timeframe = '5m'
+    lfm = mocker.patch('freqtrade.strategy.interface.IStrategy.load_freqAI_model')

     # No pair
     rc = client_get(client,
                     f"{BASE_URI}/pair_history?timeframe={timeframe}"
@@ -1440,6 +1442,7 @@ def test_api_pair_history(botclient, ohlcv_history):
     assert len(rc.json()['data']) == rc.json()['length']
     assert 'columns' in rc.json()
     assert 'data' in rc.json()
+    assert lfm.call_count == 1
     assert rc.json()['pair'] == 'UNITTEST/BTC'
     assert rc.json()['strategy'] == CURRENT_TEST_STRATEGY
     assert rc.json()['data_start'] == '2018-01-11 00:00:00+00:00'

View File

@@ -2241,8 +2241,9 @@ def test_send_msg_buy_notification_no_fiat(
     ('Short', 'short_signal_01', 2.0),
 ])
 def test_send_msg_sell_notification_no_fiat(
-        default_conf, mocker, direction, enter_signal, leverage) -> None:
+        default_conf, mocker, direction, enter_signal, leverage, time_machine) -> None:
     del default_conf['fiat_display_currency']
+    time_machine.move_to('2022-05-02 00:00:00 +00:00', tick=False)
     telegram, _, msg_mock = get_telegram_testobject(mocker, default_conf)
     telegram.send_msg({

View File

@@ -82,7 +82,7 @@ class freqai_test_classifier(IStrategy):
         return dataframe

     def set_freqai_targets(self, dataframe: DataFrame, metadata: Dict, **kwargs):
+        self.freqai.class_names = ["down", "up"]
         dataframe['&s-up_or_down'] = np.where(dataframe["close"].shift(-100) >
                                               dataframe["close"], 'up', 'down')

View File

@@ -9,6 +9,7 @@ import pytest
 from pandas import DataFrame

 from freqtrade.configuration import TimeRange
+from freqtrade.constants import CUSTOM_TAG_MAX_LENGTH
 from freqtrade.data.dataprovider import DataProvider
 from freqtrade.data.history import load_data
 from freqtrade.enums import ExitCheckTuple, ExitType, HyperoptState, SignalDirection
@@ -529,13 +530,13 @@ def test_custom_exit(default_conf, fee, caplog) -> None:
     assert res[0].exit_reason == 'hello world'

     caplog.clear()
-    strategy.custom_exit = MagicMock(return_value='h' * 100)
+    strategy.custom_exit = MagicMock(return_value='h' * CUSTOM_TAG_MAX_LENGTH * 2)
     res = strategy.should_exit(trade, 1, now,
                                enter=False, exit_=False,
                                low=None, high=None)
     assert res[0].exit_type == ExitType.CUSTOM_EXIT
     assert res[0].exit_flag is True
-    assert res[0].exit_reason == 'h' * 64
+    assert res[0].exit_reason == 'h' * (CUSTOM_TAG_MAX_LENGTH)
     assert log_has_re('Custom exit reason returned from custom_exit is too long.*', caplog)

View File

@@ -23,8 +23,7 @@ from freqtrade.configuration.load_config import (load_config_file, load_file, lo
 from freqtrade.constants import DEFAULT_DB_DRYRUN_URL, DEFAULT_DB_PROD_URL, ENV_VAR_PREFIX
 from freqtrade.enums import RunMode
 from freqtrade.exceptions import OperationalException
-from freqtrade.loggers import (FTBufferingHandler, FTStdErrStreamHandler, _set_loggers,
-                               setup_logging, setup_logging_pre)
+from freqtrade.loggers import FTBufferingHandler, _set_loggers, setup_logging, setup_logging_pre
 from tests.conftest import (CURRENT_TEST_STRATEGY, log_has, log_has_re,
                             patched_configuration_load_config_file)
@@ -659,7 +658,7 @@ def test_set_loggers_syslog():
     setup_logging(config)
     assert len(logger.handlers) == 3
     assert [x for x in logger.handlers if type(x) == logging.handlers.SysLogHandler]
-    assert [x for x in logger.handlers if type(x) == FTStdErrStreamHandler]
+    assert [x for x in logger.handlers if type(x) == logging.StreamHandler]
     assert [x for x in logger.handlers if type(x) == FTBufferingHandler]

     # setting up logging again should NOT cause the loggers to be added a second time.
     setup_logging(config)
@@ -682,7 +681,7 @@ def test_set_loggers_Filehandler(tmpdir):
     setup_logging(config)
     assert len(logger.handlers) == 3
     assert [x for x in logger.handlers if type(x) == logging.handlers.RotatingFileHandler]
-    assert [x for x in logger.handlers if type(x) == FTStdErrStreamHandler]
+    assert [x for x in logger.handlers if type(x) == logging.StreamHandler]
     assert [x for x in logger.handlers if type(x) == FTBufferingHandler]

     # setting up logging again should NOT cause the loggers to be added a second time.
     setup_logging(config)
@@ -707,7 +706,7 @@ def test_set_loggers_journald(mocker):
     setup_logging(config)
     assert len(logger.handlers) == 3
     assert [x for x in logger.handlers if type(x).__name__ == "JournaldLogHandler"]
-    assert [x for x in logger.handlers if type(x) == FTStdErrStreamHandler]
+    assert [x for x in logger.handlers if type(x) == logging.StreamHandler]

     # reset handlers to not break pytest
     logger.handlers = orig_handlers

View File

@@ -10,6 +10,8 @@ from freqtrade.exceptions import OperationalException


 def test_parse_timerange_incorrect():
+    timerange = TimeRange.parse_timerange('')
+    assert timerange == TimeRange(None, None, 0, 0)
     timerange = TimeRange.parse_timerange('20100522-')
     assert TimeRange('date', None, 1274486400, 0) == timerange
     assert timerange.timerange_str == '20100522-'
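A brief sketch of the more lenient parsing this test now covers: an empty string yields an unbounded `TimeRange` instead of raising.

```python
from freqtrade.configuration import TimeRange

assert TimeRange.parse_timerange('') == TimeRange(None, None, 0, 0)
assert TimeRange.parse_timerange('20100522-').startts == 1274486400
```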