Merge branch 'develop' into align_userdata

Matthias 2019-08-18 15:00:12 +02:00
commit 0a478bc0dc
63 changed files with 1517 additions and 941 deletions

.dependabot/config.yml

@ -0,0 +1,17 @@
version: 1
update_configs:
- package_manager: "python"
directory: "/"
update_schedule: "weekly"
allowed_updates:
- match:
update_type: "all"
target_branch: "develop"
- package_manager: "docker"
directory: "/"
update_schedule: "daily"
allowed_updates:
- match:
update_type: "all"


@ -1,37 +0,0 @@
# autogenerated pyup.io config file
# see https://pyup.io/docs/configuration/ for all available options
# configure updates globally
# default: all
# allowed: all, insecure, False
update: all
# configure dependency pinning globally
# default: True
# allowed: True, False
pin: True
# update schedule
# default: empty
# allowed: "every day", "every week", ..
schedule: "every week"
search: False
# Specify requirement files by hand, default is empty
# default: empty
# allowed: list
requirements:
- requirements.txt
- requirements-dev.txt
- requirements-plot.txt
- requirements-common.txt
# configure the branch prefix the bot is using
# default: pyup-
branch_prefix: pyup/
# allow to close stale PRs
# default: True
close_prs: True


@ -1,4 +1,4 @@
FROM python:3.7.3-slim-stretch FROM python:3.7.4-slim-stretch
RUN apt-get update \ RUN apt-get update \
&& apt-get -y install curl build-essential libssl-dev \ && apt-get -y install curl build-essential libssl-dev \


@ -2,7 +2,7 @@
This page explains the different parameters of the bot and how to run it. This page explains the different parameters of the bot and how to run it.
!Note: !!! Note:
If you've used `setup.sh`, don't forget to activate your virtual environment (`source .env/bin/activate`) before running freqtrade commands. If you've used `setup.sh`, don't forget to activate your virtual environment (`source .env/bin/activate`) before running freqtrade commands.
@ -48,21 +48,24 @@ optional arguments:
``` ```
### How to use a different configuration file? ### How to specify which configuration file should be used?
The bot allows you to select which configuration file you want to use. Per The bot allows you to select which configuration file you want to use by means of
default, the bot will load the file `./config.json` the `-c/--config` command line option:
```bash ```bash
freqtrade -c path/far/far/away/config.json freqtrade -c path/far/far/away/config.json
``` ```
Per default, the bot loads the `config.json` configuration file from the current
working directory.
### How to use multiple configuration files? ### How to use multiple configuration files?
The bot allows you to use multiple configuration files by specifying multiple The bot allows you to use multiple configuration files by specifying multiple
`-c/--config` configuration options in the command line. Configuration parameters `-c/--config` options in the command line. Configuration parameters
defined in the last configuration file override parameters with the same name defined in the latter configuration files override parameters with the same name
defined in the previous configuration file specified in the command line. defined in the previous configuration files specified in the command line earlier.
For example, you can make a separate configuration file with your key and secret
for the Exchange you use for trading, specify default configuration file with for the Exchange you use for trading, specify default configuration file with
@ -235,7 +238,7 @@ usage: freqtrade hyperopt [-h] [-i TICKER_INTERVAL] [--timerange TIMERANGE]
[--customhyperopt NAME] [--hyperopt-path PATH] [--customhyperopt NAME] [--hyperopt-path PATH]
[--eps] [-e INT] [--eps] [-e INT]
[-s {all,buy,sell,roi,stoploss} [{all,buy,sell,roi,stoploss} ...]] [-s {all,buy,sell,roi,stoploss} [{all,buy,sell,roi,stoploss} ...]]
[--dmmp] [--print-all] [-j JOBS] [--dmmp] [--print-all] [--no-color] [-j JOBS]
[--random-state INT] [--min-trades INT] [--continue] [--random-state INT] [--min-trades INT] [--continue]
[--hyperopt-loss NAME] [--hyperopt-loss NAME]
@ -271,6 +274,8 @@ optional arguments:
(same as setting `max_open_trades` to a very high (same as setting `max_open_trades` to a very high
number). number).
--print-all Print all results, not only the best ones. --print-all Print all results, not only the best ones.
--no-color Disable colorization of hyperopt results. May be
useful if you are redirecting output to a file.
-j JOBS, --job-workers JOBS -j JOBS, --job-workers JOBS
The number of concurrently running jobs for The number of concurrently running jobs for
hyperoptimization (hyperopt worker processes). If -1 hyperoptimization (hyperopt worker processes). If -1
@ -284,17 +289,18 @@ optional arguments:
--continue Continue hyperopt from previous runs. By default, --continue Continue hyperopt from previous runs. By default,
temporary files will be removed and hyperopt will temporary files will be removed and hyperopt will
start from scratch. start from scratch.
--hyperopt-loss NAME --hyperopt-loss NAME Specify the class name of the hyperopt loss function
Specify the class name of the hyperopt loss function
class (IHyperOptLoss). Different functions can class (IHyperOptLoss). Different functions can
generate completely different results, since the generate completely different results, since the
target for optimization is different. (default: target for optimization is different. Built-in
`DefaultHyperOptLoss`). Hyperopt-loss-functions are: DefaultHyperOptLoss,
OnlyProfitHyperOptLoss, SharpeHyperOptLoss.
(default: `DefaultHyperOptLoss`).
``` ```
## Edge commands ## Edge commands
To know your trade expectacny and winrate against historical data, you can use Edge. To know your trade expectancy and winrate against historical data, you can use Edge.
``` ```
usage: freqtrade edge [-h] [-i TICKER_INTERVAL] [--timerange TIMERANGE] usage: freqtrade edge [-h] [-i TICKER_INTERVAL] [--timerange TIMERANGE]


@ -1,15 +1,34 @@
# Configure the bot # Configure the bot
This page explains how to configure your `config.json` file. This page explains how to configure the bot.
## Setup config.json ## The Freqtrade configuration file
We recommend to copy and use the `config.json.example` as a template The bot uses a set of configuration parameters during its operation that together make up the bot configuration. It normally reads its configuration from a file (the Freqtrade configuration file).
Per default, the bot loads configuration from the `config.json` file located in the current working directory.
You can change the name of the configuration file used by the bot with the `-c/--config` command line option.
In some advanced use cases, multiple configuration files can be specified and used by the bot or the bot can read its configuration parameters from the process standard input stream.
If you used the [Quick start](installation.md/#quick-start) method for installing
the bot, the installation script should have already created the default configuration file (`config.json`) for you.
If the default configuration file is not created, we recommend that you copy and use the `config.json.example` as a template
for your bot configuration. for your bot configuration.
The table below will list all configuration parameters. The Freqtrade configuration file is to be written in the JSON format.
Mandatory Parameters are marked as **Required**. In addition to the standard JSON syntax, you may use one-line `// ...` and multi-line `/* ... */` comments in your configuration files and trailing commas in the lists of parameters.
Do not worry if you are not familiar with the JSON format -- simply open the configuration file in an editor of your choice, change the parameters you need, save your changes and restart the bot (or, if it was previously stopped, run it again) to apply them. The bot validates the syntax of the configuration file at startup and will warn you about any errors made while editing it.
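A minimal sketch of reading such a commented configuration programmatically, using the `rapidjson` parse mode the config loader switches to in this commit (the `config.json` path is a placeholder):
``` python
import rapidjson

# Same parse mode as the bot's config loader: allow // and /* ... */ comments
# as well as trailing commas in the JSON file.
CONFIG_PARSE_MODE = rapidjson.PM_COMMENTS | rapidjson.PM_TRAILING_COMMAS

# Placeholder path -- point this at your own configuration file.
with open("config.json") as file:
    config = rapidjson.load(file, parse_mode=CONFIG_PARSE_MODE)

print(config)
```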
## Configuration parameters
The table below will list all configuration parameters available.
Mandatory parameters are marked as **Required**.
| Command | Default | Description | | Command | Default | Description |
|----------|---------|-------------| |----------|---------|-------------|
@ -53,6 +72,7 @@ Mandatory Parameters are marked as **Required**.
| `experimental.use_sell_signal` | false | Use your sell strategy in addition of the `minimal_roi`. [Strategy Override](#parameters-in-the-strategy). | `experimental.use_sell_signal` | false | Use your sell strategy in addition of the `minimal_roi`. [Strategy Override](#parameters-in-the-strategy).
| `experimental.sell_profit_only` | false | Waits until you have made a positive profit before taking a sell decision. [Strategy Override](#parameters-in-the-strategy). | `experimental.sell_profit_only` | false | Waits until you have made a positive profit before taking a sell decision. [Strategy Override](#parameters-in-the-strategy).
| `experimental.ignore_roi_if_buy_signal` | false | Does not sell if the buy-signal is still active. Takes preference over `minimal_roi` and `use_sell_signal`. [Strategy Override](#parameters-in-the-strategy). | `experimental.ignore_roi_if_buy_signal` | false | Does not sell if the buy-signal is still active. Takes preference over `minimal_roi` and `use_sell_signal`. [Strategy Override](#parameters-in-the-strategy).
| `experimental.block_bad_exchanges` | true | Block exchanges known to not work with freqtrade. Leave on default unless you want to test if that exchange works now.
| `pairlist.method` | StaticPairList | Use static or dynamic volume-based pairlist. [More information below](#dynamic-pairlists). | `pairlist.method` | StaticPairList | Use static or dynamic volume-based pairlist. [More information below](#dynamic-pairlists).
| `pairlist.config` | None | Additional configuration for dynamic pairlists. [More information below](#dynamic-pairlists). | `pairlist.config` | None | Additional configuration for dynamic pairlists. [More information below](#dynamic-pairlists).
| `telegram.enabled` | true | **Required.** Enable or not the usage of Telegram. | `telegram.enabled` | true | **Required.** Enable or not the usage of Telegram.


@ -31,6 +31,16 @@ df = load_trades_from_db("sqlite:///tradesv3.sqlite")
df.groupby("pair")["sell_reason"].value_counts() df.groupby("pair")["sell_reason"].value_counts()
``` ```
### Load multiple configuration files
This option can be useful to inspect the results of passing in multiple configs, for example in case of problems.
``` python
from freqtrade.configuration import Configuration
config = Configuration.from_files(["config1.json", "config2.json"])
print(config)
```
## Strategy debugging example ## Strategy debugging example
Debugging a strategy can be time-consuming. FreqTrade offers helper functions to visualize raw data. Debugging a strategy can be time-consuming. FreqTrade offers helper functions to visualize raw data.


@ -17,6 +17,29 @@ Alternatively (if your system is not supported by the setup.sh script), follow t
This will install all required tools for development, including `pytest`, `flake8`, `mypy`, and `coveralls`. This will install all required tools for development, including `pytest`, `flake8`, `mypy`, and `coveralls`.
### Tests
New code should be covered by basic unit tests. Depending on the complexity of the feature, reviewers may request more in-depth unit tests.
If necessary, the Freqtrade team can assist and give guidance with writing good tests (however please don't expect anyone to write the tests for you).
#### Checking log content in tests
Freqtrade uses 2 main methods to check log content in tests, `log_has()` and `log_has_re()` (to check using regex, in case of dynamic log-messages).
These are available from `conftest.py` and can be imported in any test module.
A sample check looks as follows:
``` python
from freqtrade.tests.conftest import log_has, log_has_re
def test_method_to_test(caplog):
method_to_test()
assert log_has("This event happened", caplog)
# Check regex with trailing number ...
assert log_has_re(r"This dynamic event happened and produced \d+", caplog)
```
## Modules ## Modules
### Dynamic Pairlist ### Dynamic Pairlist


@ -26,6 +26,10 @@ To update the image, simply run the above commands again and restart your runnin
Should you require additional libraries, please [build the image yourself](#build-your-own-docker-image). Should you require additional libraries, please [build the image yourself](#build-your-own-docker-image).
!!! Note Docker image update frequency
The official docker images with tags `master`, `develop` and `latest` are automatically rebuilt once a week to keep the base image up to date.
In addition to that, every merge to `develop` will trigger a rebuild for `develop` and `latest`.
### Prepare the configuration files ### Prepare the configuration files
Even though you will use docker, you'll still need some files from the github repository. Even though you will use docker, you'll still need some files from the github repository.


@ -164,7 +164,11 @@ By default, FreqTrade uses a loss function, which has been with freqtrade since
A different loss function can be specified by using the `--hyperopt-loss <Class-name>` argument. A different loss function can be specified by using the `--hyperopt-loss <Class-name>` argument.
This class should be in its own file within the `user_data/hyperopts/` directory. This class should be in its own file within the `user_data/hyperopts/` directory.
Currently, the following loss functions are builtin: `DefaultHyperOptLoss` (default legacy Freqtrade hyperoptimization loss function), `SharpeHyperOptLoss` (optimizes Sharpe Ratio calculated on the trade returns) and `OnlyProfitHyperOptLoss` (which takes only amount of profit into consideration). Currently, the following loss functions are builtin:
* `DefaultHyperOptLoss` (default legacy Freqtrade hyperoptimization loss function)
* `OnlyProfitHyperOptLoss` (which takes only amount of profit into consideration)
* `SharpeHyperOptLoss` (optimizes Sharpe Ratio calculated on the trade returns)
### Creating and using a custom loss function ### Creating and using a custom loss function
@ -348,6 +352,10 @@ def populate_buy_trend(self, dataframe: DataFrame) -> DataFrame:
return dataframe return dataframe
``` ```
By default, hyperopt prints colorized results -- epochs with positive profit are printed in green. This highlighting helps you find epochs that can be interesting for later analysis. Epochs with zero total profit or with negative profit (losses) are printed in the normal color. If you do not need colorization of results (for instance, when you are redirecting hyperopt output to a file), you can switch colorization off with the `--no-color` command line option.
You can use the `--print-all` command line option if you would like to see all results in the hyperopt output, not only the best ones. When `--print-all` is used, current best results are also colorized by default -- they are printed in bold (bright) style. This can also be switched off with the `--no-color` command line option.
### Understand Hyperopt ROI results ### Understand Hyperopt ROI results
If you are optimizing ROI (i.e. if optimization search-space contains 'all' or 'roi'), your result will look as follows and include a ROI table: If you are optimizing ROI (i.e. if optimization search-space contains 'all' or 'roi'), your result will look as follows and include a ROI table:


@ -309,8 +309,10 @@ if self.dp:
dataframe['best_bid'] = ob['bids'][0][0] dataframe['best_bid'] = ob['bids'][0][0]
dataframe['best_ask'] = ob['asks'][0][0] dataframe['best_ask'] = ob['asks'][0][0]
``` ```
!Warning The order book is not part of the historic data which means backtesting and hyperopt will not work if this
method is used. !!! Warning
The order book is not part of the historic data which means backtesting and hyperopt will not work if this
method is used.
#### Available Pairs #### Available Pairs


@ -1,2 +1,3 @@
from freqtrade.configuration.arguments import Arguments, TimeRange # noqa: F401 from freqtrade.configuration.arguments import Arguments # noqa: F401
from freqtrade.configuration.timerange import TimeRange # noqa: F401
from freqtrade.configuration.configuration import Configuration # noqa: F401 from freqtrade.configuration.configuration import Configuration # noqa: F401


@ -2,10 +2,8 @@
This module contains the argument manager class This module contains the argument manager class
""" """
import argparse import argparse
import re from typing import List, Optional
from typing import List, NamedTuple, Optional
import arrow
from freqtrade.configuration.cli_options import AVAILABLE_CLI_OPTIONS from freqtrade.configuration.cli_options import AVAILABLE_CLI_OPTIONS
from freqtrade import constants from freqtrade import constants
@ -23,7 +21,8 @@ ARGS_BACKTEST = ARGS_COMMON_OPTIMIZE + ["position_stacking", "use_max_market_pos
ARGS_HYPEROPT = ARGS_COMMON_OPTIMIZE + ["hyperopt", "hyperopt_path", ARGS_HYPEROPT = ARGS_COMMON_OPTIMIZE + ["hyperopt", "hyperopt_path",
"position_stacking", "epochs", "spaces", "position_stacking", "epochs", "spaces",
"use_max_market_positions", "print_all", "hyperopt_jobs", "use_max_market_positions", "print_all",
"print_colorized", "print_json", "hyperopt_jobs",
"hyperopt_random_state", "hyperopt_min_trades", "hyperopt_random_state", "hyperopt_min_trades",
"hyperopt_continue", "hyperopt_loss"] "hyperopt_continue", "hyperopt_loss"]
@ -44,18 +43,6 @@ ARGS_PLOT_PROFIT = (ARGS_COMMON + ARGS_STRATEGY +
["pairs", "timerange", "export", "exportfilename", "db_url", "trade_source"]) ["pairs", "timerange", "export", "exportfilename", "db_url", "trade_source"])
class TimeRange(NamedTuple):
"""
NamedTuple defining timerange inputs.
[start/stop]type defines if [start/stop]ts shall be used.
if *type is None, don't use corresponding startvalue.
"""
starttype: Optional[str] = None
stoptype: Optional[str] = None
startts: int = 0
stopts: int = 0
class Arguments(object): class Arguments(object):
""" """
Arguments Class. Manage the arguments received by the cli Arguments Class. Manage the arguments received by the cli
@ -138,45 +125,3 @@ class Arguments(object):
) )
list_exchanges_cmd.set_defaults(func=start_list_exchanges) list_exchanges_cmd.set_defaults(func=start_list_exchanges)
self._build_args(optionlist=ARGS_LIST_EXCHANGES, parser=list_exchanges_cmd) self._build_args(optionlist=ARGS_LIST_EXCHANGES, parser=list_exchanges_cmd)
@staticmethod
def parse_timerange(text: Optional[str]) -> TimeRange:
"""
Parse the value of the argument --timerange to determine what is the range desired
:param text: value from --timerange
:return: Start and End range period
"""
if text is None:
return TimeRange(None, None, 0, 0)
syntax = [(r'^-(\d{8})$', (None, 'date')),
(r'^(\d{8})-$', ('date', None)),
(r'^(\d{8})-(\d{8})$', ('date', 'date')),
(r'^-(\d{10})$', (None, 'date')),
(r'^(\d{10})-$', ('date', None)),
(r'^(\d{10})-(\d{10})$', ('date', 'date')),
(r'^(-\d+)$', (None, 'line')),
(r'^(\d+)-$', ('line', None)),
(r'^(\d+)-(\d+)$', ('index', 'index'))]
for rex, stype in syntax:
# Apply the regular expression to text
match = re.match(rex, text)
if match: # Regex has matched
rvals = match.groups()
index = 0
start: int = 0
stop: int = 0
if stype[0]:
starts = rvals[index]
if stype[0] == 'date' and len(starts) == 8:
start = arrow.get(starts, 'YYYYMMDD').timestamp
else:
start = int(starts)
index += 1
if stype[1]:
stops = rvals[index]
if stype[1] == 'date' and len(stops) == 8:
stop = arrow.get(stops, 'YYYYMMDD').timestamp
else:
stop = int(stops)
return TimeRange(stype[0], stype[1], start, stop)
raise Exception('Incorrect syntax for timerange "%s"' % text)


@ -2,9 +2,9 @@ import logging
from typing import Any, Dict from typing import Any, Dict
from freqtrade import OperationalException from freqtrade import OperationalException
from freqtrade.exchange import (is_exchange_bad, is_exchange_available, from freqtrade.exchange import (available_exchanges, get_exchange_bad_reason,
is_exchange_officially_supported, available_exchanges) is_exchange_available, is_exchange_bad,
is_exchange_officially_supported)
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -31,9 +31,8 @@ def check_exchange(config: Dict[str, Any], check_for_bad: bool = True) -> bool:
) )
if check_for_bad and is_exchange_bad(exchange): if check_for_bad and is_exchange_bad(exchange):
logger.warning(f'Exchange "{exchange}" is known to not work with the bot yet. ' raise OperationalException(f'Exchange "{exchange}" is known to not work with the bot yet. '
f'Use it only for development and testing purposes.') f'Reason: {get_exchange_bad_reason(exchange)}')
return False
if is_exchange_officially_supported(exchange): if is_exchange_officially_supported(exchange):
logger.info(f'Exchange "{exchange}" is officially supported ' logger.info(f'Exchange "{exchange}" is officially supported '


@ -196,6 +196,19 @@ AVAILABLE_CLI_OPTIONS = {
action='store_true', action='store_true',
default=False, default=False,
), ),
"print_colorized": Arg(
'--no-color',
help='Disable colorization of hyperopt results. May be useful if you are '
'redirecting output to a file.',
action='store_false',
default=True,
),
"print_json": Arg(
'--print-json',
help='Print best result detailization in JSON format.',
action='store_true',
default=False,
),
"hyperopt_jobs": Arg( "hyperopt_jobs": Arg(
'-j', '--job-workers', '-j', '--job-workers',
help='The number of concurrently running jobs for hyperoptimization ' help='The number of concurrently running jobs for hyperoptimization '
@ -231,7 +244,9 @@ AVAILABLE_CLI_OPTIONS = {
'--hyperopt-loss', '--hyperopt-loss',
help='Specify the class name of the hyperopt loss function class (IHyperOptLoss). ' help='Specify the class name of the hyperopt loss function class (IHyperOptLoss). '
'Different functions can generate completely different results, ' 'Different functions can generate completely different results, '
'since the target for optimization is different. (default: `%(default)s`).', 'since the target for optimization is different. Built-in Hyperopt-loss-functions are: '
'DefaultHyperOptLoss, OnlyProfitHyperOptLoss, SharpeHyperOptLoss.'
'(default: `%(default)s`).',
metavar='NAME', metavar='NAME',
default=constants.DEFAULT_HYPEROPT_LOSS, default=constants.DEFAULT_HYPEROPT_LOSS,
), ),


@ -5,11 +5,12 @@ import logging
import warnings import warnings
from argparse import Namespace from argparse import Namespace
from pathlib import Path from pathlib import Path
from typing import Any, Callable, Dict, Optional from typing import Any, Callable, Dict, List, Optional
from freqtrade import OperationalException, constants from freqtrade import OperationalException, constants
from freqtrade.configuration.check_exchange import check_exchange from freqtrade.configuration.check_exchange import check_exchange
from freqtrade.configuration.directory_operations import create_datadir, create_userdata_dir from freqtrade.configuration.directory_operations import (create_datadir,
create_userdata_dir)
from freqtrade.configuration.json_schema import validate_config_schema from freqtrade.configuration.json_schema import validate_config_schema
from freqtrade.configuration.load_config import load_config_file from freqtrade.configuration.load_config import load_config_file
from freqtrade.loggers import setup_logging from freqtrade.loggers import setup_logging
@ -40,43 +41,43 @@ class Configuration(object):
return self.config return self.config
def _load_config_files(self) -> Dict[str, Any]: @staticmethod
def from_files(files: List[str]) -> Dict[str, Any]:
""" """
Iterate through the config files passed in the args, Iterate through the config files passed in, loading all of them
loading all of them and merging their contents. and merging their contents.
Files are loaded in sequence, parameters in later configuration files
override the same parameter from an earlier file (last definition wins).
:param files: List of file paths
:return: configuration dictionary
""" """
# Keep this method as staticmethod, so it can be used from interactive environments
config: Dict[str, Any] = {} config: Dict[str, Any] = {}
# We expect here a list of config filenames # We expect here a list of config filenames
for path in self.args.config: for path in files:
logger.info('Using config: %s ...', path) logger.info(f'Using config: {path} ...')
# Merge config options, overwriting old values # Merge config options, overwriting old values
config = deep_merge_dicts(load_config_file(path), config) config = deep_merge_dicts(load_config_file(path), config)
return config # Normalize config
def _normalize_config(self, config: Dict[str, Any]) -> None:
"""
Make config more canonical -- i.e. for example add missing parts that we expect
to be normally in it...
"""
if 'internals' not in config: if 'internals' not in config:
config['internals'] = {} config['internals'] = {}
# validate configuration before returning
logger.info('Validating configuration ...')
validate_config_schema(config)
return config
def load_config(self) -> Dict[str, Any]: def load_config(self) -> Dict[str, Any]:
""" """
Extract information for sys.argv and load the bot configuration Extract information for sys.argv and load the bot configuration
:return: Configuration dictionary :return: Configuration dictionary
""" """
# Load all configs # Load all configs
config: Dict[str, Any] = self._load_config_files() config: Dict[str, Any] = Configuration.from_files(self.args.config)
# Make resulting config more canonical
self._normalize_config(config)
logger.info('Validating configuration ...')
validate_config_schema(config)
self._validate_config_consistency(config) self._validate_config_consistency(config)
@ -146,7 +147,7 @@ class Configuration(object):
config['internals'].update({'sd_notify': True}) config['internals'].update({'sd_notify': True})
# Check if the exchange set by the user is supported # Check if the exchange set by the user is supported
check_exchange(config) check_exchange(config, config.get('experimental', {}).get('block_bad_exchanges', True))
def _process_datadir_options(self, config: Dict[str, Any]) -> None: def _process_datadir_options(self, config: Dict[str, Any]) -> None:
""" """
@ -244,6 +245,15 @@ class Configuration(object):
self._args_to_config(config, argname='print_all', self._args_to_config(config, argname='print_all',
logstring='Parameter --print-all detected ...') logstring='Parameter --print-all detected ...')
if 'print_colorized' in self.args and not self.args.print_colorized:
logger.info('Parameter --no-color detected ...')
config.update({'print_colorized': False})
else:
config.update({'print_colorized': True})
self._args_to_config(config, argname='print_json',
logstring='Parameter --print-json detected ...')
self._args_to_config(config, argname='hyperopt_jobs', self._args_to_config(config, argname='hyperopt_jobs',
logstring='Parameter -j/--job-workers detected: {}') logstring='Parameter -j/--job-workers detected: {}')
@ -280,7 +290,7 @@ class Configuration(object):
if not self.runmode: if not self.runmode:
# Handle real mode, infer dry/live from config # Handle real mode, infer dry/live from config
self.runmode = RunMode.DRY_RUN if config.get('dry_run', True) else RunMode.LIVE self.runmode = RunMode.DRY_RUN if config.get('dry_run', True) else RunMode.LIVE
logger.info("Runmode set to {self.runmode}.") logger.info(f"Runmode set to {self.runmode}.")
config.update({'runmode': self.runmode}) config.update({'runmode': self.runmode})


@ -1,7 +1,7 @@
""" """
This module contain functions to load the configuration file This module contain functions to load the configuration file
""" """
import json import rapidjson
import logging import logging
import sys import sys
from typing import Any, Dict from typing import Any, Dict
@ -12,6 +12,9 @@ from freqtrade import OperationalException
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
CONFIG_PARSE_MODE = rapidjson.PM_COMMENTS | rapidjson.PM_TRAILING_COMMAS
def load_config_file(path: str) -> Dict[str, Any]: def load_config_file(path: str) -> Dict[str, Any]:
""" """
Loads a config file from the given path Loads a config file from the given path
@ -21,7 +24,7 @@ def load_config_file(path: str) -> Dict[str, Any]:
try: try:
# Read config from stdin if requested in the options # Read config from stdin if requested in the options
with open(path) if path != '-' else sys.stdin as file: with open(path) if path != '-' else sys.stdin as file:
config = json.load(file) config = rapidjson.load(file, parse_mode=CONFIG_PARSE_MODE)
except FileNotFoundError: except FileNotFoundError:
raise OperationalException( raise OperationalException(
f'Config file "{path}" not found!' f'Config file "{path}" not found!'


@ -0,0 +1,70 @@
"""
This module contains the argument manager class
"""
import re
from typing import Optional
import arrow
class TimeRange():
"""
object defining timerange inputs.
[start/stop]type defines if [start/stop]ts shall be used.
if *type is None, don't use corresponding startvalue.
"""
def __init__(self, starttype: Optional[str] = None, stoptype: Optional[str] = None,
startts: int = 0, stopts: int = 0):
self.starttype: Optional[str] = starttype
self.stoptype: Optional[str] = stoptype
self.startts: int = startts
self.stopts: int = stopts
def __eq__(self, other):
"""Override the default Equals behavior"""
return (self.starttype == other.starttype and self.stoptype == other.stoptype
and self.startts == other.startts and self.stopts == other.stopts)
@staticmethod
def parse_timerange(text: Optional[str]):
"""
Parse the value of the argument --timerange to determine what is the range desired
:param text: value from --timerange
:return: Start and End range period
"""
if text is None:
return TimeRange(None, None, 0, 0)
syntax = [(r'^-(\d{8})$', (None, 'date')),
(r'^(\d{8})-$', ('date', None)),
(r'^(\d{8})-(\d{8})$', ('date', 'date')),
(r'^-(\d{10})$', (None, 'date')),
(r'^(\d{10})-$', ('date', None)),
(r'^(\d{10})-(\d{10})$', ('date', 'date')),
(r'^(-\d+)$', (None, 'line')),
(r'^(\d+)-$', ('line', None)),
(r'^(\d+)-(\d+)$', ('index', 'index'))]
for rex, stype in syntax:
# Apply the regular expression to text
match = re.match(rex, text)
if match: # Regex has matched
rvals = match.groups()
index = 0
start: int = 0
stop: int = 0
if stype[0]:
starts = rvals[index]
if stype[0] == 'date' and len(starts) == 8:
start = arrow.get(starts, 'YYYYMMDD').timestamp
else:
start = int(starts)
index += 1
if stype[1]:
stops = rvals[index]
if stype[1] == 'date' and len(stops) == 8:
stop = arrow.get(stops, 'YYYYMMDD').timestamp
else:
stop = int(stops)
return TimeRange(stype[0], stype[1], start, stop)
raise Exception('Incorrect syntax for timerange "%s"' % text)
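A short usage sketch of the relocated `TimeRange` class (the dates are arbitrary examples):
``` python
from freqtrade.configuration import TimeRange

# Closed range in YYYYMMDD-YYYYMMDD form (arbitrary example dates).
timerange = TimeRange.parse_timerange("20190101-20190301")
print(timerange.starttype, timerange.startts)  # 'date' and the epoch timestamp of 2019-01-01
print(timerange.stoptype, timerange.stopts)    # 'date' and the epoch timestamp of 2019-03-01

# Open-ended range: everything from the given date onwards.
print(TimeRange.parse_timerange("20190101-").stoptype)  # None
```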


@ -44,36 +44,49 @@ class DataProvider():
def ohlcv(self, pair: str, ticker_interval: str = None, copy: bool = True) -> DataFrame: def ohlcv(self, pair: str, ticker_interval: str = None, copy: bool = True) -> DataFrame:
""" """
get ohlcv data for the given pair as DataFrame Get ohlcv data for the given pair as DataFrame
Please check `available_pairs` to verify which pairs are currently cached. Please use the `available_pairs` method to verify which pairs are currently cached.
:param pair: pair to get the data for :param pair: pair to get the data for
:param ticker_interval: ticker_interval to get pair for :param ticker_interval: ticker interval to get data for
:param copy: copy dataframe before returning. :param copy: copy dataframe before returning if True.
Use false only for RO operations (where the dataframe is not modified) Use False only for read-only operations (where the dataframe is not modified)
""" """
if self.runmode in (RunMode.DRY_RUN, RunMode.LIVE): if self.runmode in (RunMode.DRY_RUN, RunMode.LIVE):
if ticker_interval: return self._exchange.klines((pair, ticker_interval or self._config['ticker_interval']),
pairtick = (pair, ticker_interval) copy=copy)
else:
pairtick = (pair, self._config['ticker_interval'])
return self._exchange.klines(pairtick, copy=copy)
else: else:
return DataFrame() return DataFrame()
def historic_ohlcv(self, pair: str, ticker_interval: str) -> DataFrame: def historic_ohlcv(self, pair: str, ticker_interval: str = None) -> DataFrame:
""" """
get stored historic ohlcv data Get stored historic ohlcv data
:param pair: pair to get the data for :param pair: pair to get the data for
:param ticker_interval: ticker_interval to get pair for :param ticker_interval: ticker interval to get data for
""" """
return load_pair_history(pair=pair, return load_pair_history(pair=pair,
ticker_interval=ticker_interval, ticker_interval=ticker_interval or self._config['ticker_interval'],
refresh_pairs=False, refresh_pairs=False,
datadir=Path(self._config['datadir']) if self._config.get( datadir=Path(self._config['datadir']) if self._config.get(
'datadir') else None 'datadir') else None
) )
def get_pair_dataframe(self, pair: str, ticker_interval: str = None) -> DataFrame:
"""
Return pair ohlcv data, either live or cached historical -- depending
on the runmode.
:param pair: pair to get the data for
:param ticker_interval: ticker interval to get data for
"""
if self.runmode in (RunMode.DRY_RUN, RunMode.LIVE):
# Get live ohlcv data.
data = self.ohlcv(pair=pair, ticker_interval=ticker_interval)
else:
# Get historic ohlcv data (cached on disk).
data = self.historic_ohlcv(pair=pair, ticker_interval=ticker_interval)
if len(data) == 0:
logger.warning(f"No data found for ({pair}, {ticker_interval}).")
return data
def ticker(self, pair: str): def ticker(self, pair: str):
""" """
Return last ticker data Return last ticker data
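A hedged usage sketch of the new `get_pair_dataframe()` helper; the pair and ticker interval are arbitrary examples and `dataprovider` is assumed to be an already-initialized `DataProvider` (for instance `self.dp` inside a strategy):
``` python
from pandas import DataFrame

from freqtrade.data.dataprovider import DataProvider


def informative_candles(dataprovider: DataProvider) -> DataFrame:
    # Returns live candles in dry-run/live mode and cached history from disk
    # in backtesting/hyperopt; pair and interval are example values.
    return dataprovider.get_pair_dataframe(pair="ETH/BTC", ticker_interval="5m")
```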


@ -252,10 +252,11 @@ def download_pair_history(datadir: Optional[Path],
logger.debug("Current End: %s", misc.format_ms_time(data[-1][0]) if data else 'None') logger.debug("Current End: %s", misc.format_ms_time(data[-1][0]) if data else 'None')
# Default since_ms to 30 days if nothing is given # Default since_ms to 30 days if nothing is given
new_data = exchange.get_history(pair=pair, ticker_interval=ticker_interval, new_data = exchange.get_historic_ohlcv(pair=pair, ticker_interval=ticker_interval,
since_ms=since_ms if since_ms since_ms=since_ms if since_ms
else else
int(arrow.utcnow().shift(days=-30).float_timestamp) * 1000) int(arrow.utcnow().shift(
days=-30).float_timestamp) * 1000)
data.extend(new_data) data.extend(new_data)
logger.debug("New Start: %s", misc.format_ms_time(data[0][0])) logger.debug("New Start: %s", misc.format_ms_time(data[0][0]))


@ -10,7 +10,7 @@ import utils_find_1st as utf1st
from pandas import DataFrame from pandas import DataFrame
from freqtrade import constants, OperationalException from freqtrade import constants, OperationalException
from freqtrade.configuration import Arguments, TimeRange from freqtrade.configuration import TimeRange
from freqtrade.data import history from freqtrade.data import history
from freqtrade.strategy.interface import SellType from freqtrade.strategy.interface import SellType
@ -75,7 +75,7 @@ class Edge():
self._stoploss_range_step self._stoploss_range_step
) )
self._timerange: TimeRange = Arguments.parse_timerange("%s-" % arrow.now().shift( self._timerange: TimeRange = TimeRange.parse_timerange("%s-" % arrow.now().shift(
days=-1 * self._since_number_of_days).format('YYYYMMDD')) days=-1 * self._since_number_of_days).format('YYYYMMDD'))
self.fee = self.exchange.get_fee() self.fee = self.exchange.get_fee()


@ -1,10 +1,13 @@
from freqtrade.exchange.exchange import Exchange # noqa: F401 from freqtrade.exchange.exchange import Exchange # noqa: F401
from freqtrade.exchange.exchange import (is_exchange_bad, # noqa: F401 from freqtrade.exchange.exchange import (get_exchange_bad_reason, # noqa: F401
is_exchange_bad,
is_exchange_available, is_exchange_available,
is_exchange_officially_supported, is_exchange_officially_supported,
available_exchanges) available_exchanges)
from freqtrade.exchange.exchange import (timeframe_to_seconds, # noqa: F401 from freqtrade.exchange.exchange import (timeframe_to_seconds, # noqa: F401
timeframe_to_minutes, timeframe_to_minutes,
timeframe_to_msecs) timeframe_to_msecs,
timeframe_to_next_date,
timeframe_to_prev_date)
from freqtrade.exchange.kraken import Kraken # noqa: F401 from freqtrade.exchange.kraken import Kraken # noqa: F401
from freqtrade.exchange.binance import Binance # noqa: F401 from freqtrade.exchange.binance import Binance # noqa: F401


@ -6,7 +6,7 @@ import asyncio
import inspect import inspect
import logging import logging
from copy import deepcopy from copy import deepcopy
from datetime import datetime from datetime import datetime, timezone
from math import ceil, floor from math import ceil, floor
from random import randint from random import randint
from typing import Any, Dict, List, Optional, Tuple from typing import Any, Dict, List, Optional, Tuple
@ -25,6 +25,11 @@ logger = logging.getLogger(__name__)
API_RETRY_COUNT = 4 API_RETRY_COUNT = 4
BAD_EXCHANGES = {
"bitmex": "Various reasons",
"bitstamp": "Does not provide history. "
"Details in https://github.com/freqtrade/freqtrade/issues/1983",
}
def retrier_async(f): def retrier_async(f):
@ -371,7 +376,7 @@ class Exchange(object):
'side': side, 'side': side,
'remaining': amount, 'remaining': amount,
'datetime': arrow.utcnow().isoformat(), 'datetime': arrow.utcnow().isoformat(),
'status': "open", 'status': "closed" if ordertype == "market" else "open",
'fee': None, 'fee': None,
"info": {} "info": {}
} }
@ -541,19 +546,24 @@ class Exchange(object):
logger.info("returning cached ticker-data for %s", pair) logger.info("returning cached ticker-data for %s", pair)
return self._cached_ticker[pair] return self._cached_ticker[pair]
def get_history(self, pair: str, ticker_interval: str, def get_historic_ohlcv(self, pair: str, ticker_interval: str,
since_ms: int) -> List: since_ms: int) -> List:
""" """
Gets candle history using asyncio and returns the list of candles. Gets candle history using asyncio and returns the list of candles.
Handles all async doing. Handles all async doing.
Async over one pair, assuming we get `_ohlcv_candle_limit` candles per call.
:param pair: Pair to download
:param ticker_interval: Interval to get
:param since_ms: Timestamp in milliseconds to get history from
:returns List of tickers
""" """
return asyncio.get_event_loop().run_until_complete( return asyncio.get_event_loop().run_until_complete(
self._async_get_history(pair=pair, ticker_interval=ticker_interval, self._async_get_historic_ohlcv(pair=pair, ticker_interval=ticker_interval,
since_ms=since_ms)) since_ms=since_ms))
async def _async_get_history(self, pair: str, async def _async_get_historic_ohlcv(self, pair: str,
ticker_interval: str, ticker_interval: str,
since_ms: int) -> List: since_ms: int) -> List:
one_call = timeframe_to_msecs(ticker_interval) * self._ohlcv_candle_limit one_call = timeframe_to_msecs(ticker_interval) * self._ohlcv_candle_limit
logger.debug( logger.debug(
@ -579,7 +589,10 @@ class Exchange(object):
def refresh_latest_ohlcv(self, pair_list: List[Tuple[str, str]]) -> List[Tuple[str, List]]: def refresh_latest_ohlcv(self, pair_list: List[Tuple[str, str]]) -> List[Tuple[str, List]]:
""" """
Refresh in-memory ohlcv asyncronously and set `_klines` with the result Refresh in-memory ohlcv asynchronously and set `_klines` with the result
Loops asynchronously over pair_list and downloads all pairs async (semi-parallel).
:param pair_list: List of 2 element tuples containing pair, interval to refresh
:return: Returns a List of ticker-dataframes.
""" """
logger.debug("Refreshing ohlcv data for %d pairs", len(pair_list)) logger.debug("Refreshing ohlcv data for %d pairs", len(pair_list))
@ -627,7 +640,7 @@ class Exchange(object):
async def _async_get_candle_history(self, pair: str, ticker_interval: str, async def _async_get_candle_history(self, pair: str, ticker_interval: str,
since_ms: Optional[int] = None) -> Tuple[str, str, List]: since_ms: Optional[int] = None) -> Tuple[str, str, List]:
""" """
Asyncronously gets candle histories using fetch_ohlcv Asynchronously gets candle histories using fetch_ohlcv
returns tuple: (pair, ticker_interval, ohlcv_list) returns tuple: (pair, ticker_interval, ohlcv_list)
""" """
try: try:
@ -755,7 +768,11 @@ class Exchange(object):
def is_exchange_bad(exchange: str) -> bool: def is_exchange_bad(exchange: str) -> bool:
return exchange in ['bitmex', 'bitstamp'] return exchange in BAD_EXCHANGES
def get_exchange_bad_reason(exchange: str) -> str:
return BAD_EXCHANGES.get(exchange, "")
def is_exchange_available(exchange: str, ccxt_module=None) -> bool: def is_exchange_available(exchange: str, ccxt_module=None) -> bool:
@ -781,13 +798,45 @@ def timeframe_to_seconds(ticker_interval: str) -> int:
def timeframe_to_minutes(ticker_interval: str) -> int: def timeframe_to_minutes(ticker_interval: str) -> int:
""" """
Same as above, but returns minutes. Same as timeframe_to_seconds, but returns minutes.
""" """
return ccxt.Exchange.parse_timeframe(ticker_interval) // 60 return ccxt.Exchange.parse_timeframe(ticker_interval) // 60
def timeframe_to_msecs(ticker_interval: str) -> int: def timeframe_to_msecs(ticker_interval: str) -> int:
""" """
Same as above, but returns milliseconds. Same as timeframe_to_seconds, but returns milliseconds.
""" """
return ccxt.Exchange.parse_timeframe(ticker_interval) * 1000 return ccxt.Exchange.parse_timeframe(ticker_interval) * 1000
def timeframe_to_prev_date(timeframe: str, date: datetime = None) -> datetime:
"""
Use Timeframe and determine last possible candle.
:param timeframe: timeframe in string format (e.g. "5m")
:param date: date to use. Defaults to utcnow()
:returns: date of previous candle (with utc timezone)
"""
if not date:
date = datetime.now(timezone.utc)
timeframe_secs = timeframe_to_seconds(timeframe)
# Get offset based on timeframe_secs
offset = date.timestamp() % timeframe_secs
# Subtract seconds passed since last offset
new_timestamp = date.timestamp() - offset
return datetime.fromtimestamp(new_timestamp, tz=timezone.utc)
def timeframe_to_next_date(timeframe: str, date: datetime = None) -> datetime:
"""
Use Timeframe and determine next candle.
:param timeframe: timeframe in string format (e.g. "5m")
:param date: date to use. Defaults to utcnow()
:returns: date of next candle (with utc timezone)
"""
prevdate = timeframe_to_prev_date(timeframe, date)
timeframe_secs = timeframe_to_seconds(timeframe)
# Add one interval to previous candle
new_timestamp = prevdate.timestamp() + timeframe_secs
return datetime.fromtimestamp(new_timestamp, tz=timezone.utc)
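A small usage sketch of the two new timeframe helpers (the timeframe and timestamp are arbitrary examples):
``` python
from datetime import datetime, timezone

from freqtrade.exchange import timeframe_to_next_date, timeframe_to_prev_date

# Arbitrary example: 12:37 UTC with a 5m timeframe.
date = datetime(2019, 8, 18, 12, 37, tzinfo=timezone.utc)

print(timeframe_to_prev_date("5m", date))  # 2019-08-18 12:35:00+00:00
print(timeframe_to_next_date("5m", date))  # 2019-08-18 12:40:00+00:00
```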


@ -16,11 +16,11 @@ from freqtrade import (DependencyException, OperationalException, InvalidOrderEx
from freqtrade.data.converter import order_book_to_dataframe from freqtrade.data.converter import order_book_to_dataframe
from freqtrade.data.dataprovider import DataProvider from freqtrade.data.dataprovider import DataProvider
from freqtrade.edge import Edge from freqtrade.edge import Edge
from freqtrade.exchange import timeframe_to_minutes from freqtrade.exchange import timeframe_to_minutes, timeframe_to_next_date
from freqtrade.persistence import Trade from freqtrade.persistence import Trade
from freqtrade.rpc import RPCManager, RPCMessageType from freqtrade.rpc import RPCManager, RPCMessageType
from freqtrade.resolvers import ExchangeResolver, StrategyResolver, PairListResolver from freqtrade.resolvers import ExchangeResolver, StrategyResolver, PairListResolver
from freqtrade.state import State from freqtrade.state import State, RunMode
from freqtrade.strategy.interface import SellType, IStrategy from freqtrade.strategy.interface import SellType, IStrategy
from freqtrade.wallets import Wallets from freqtrade.wallets import Wallets
@ -75,6 +75,12 @@ class FreqtradeBot(object):
persistence.init(self.config.get('db_url', None), persistence.init(self.config.get('db_url', None),
clean_open_orders=self.config.get('dry_run', False)) clean_open_orders=self.config.get('dry_run', False))
# Stoploss on exchange does not make sense, therefore we need to disable that.
if (self.dataprovider.runmode == RunMode.DRY_RUN and
self.strategy.order_types.get('stoploss_on_exchange', False)):
logger.info("Disabling stoploss_on_exchange during dry-run.")
self.strategy.order_types['stoploss_on_exchange'] = False
config['order_types']['stoploss_on_exchange'] = False
# Set initial bot state from config # Set initial bot state from config
initial_state = self.config.get('initial_state') initial_state = self.config.get('initial_state')
self.state = State[initial_state.upper()] if initial_state else State.STOPPED self.state = State[initial_state.upper()] if initial_state else State.STOPPED
@ -99,13 +105,12 @@ class FreqtradeBot(object):
# Adjust stoploss if it was changed # Adjust stoploss if it was changed
Trade.stoploss_reinitialization(self.strategy.stoploss) Trade.stoploss_reinitialization(self.strategy.stoploss)
def process(self) -> bool: def process(self) -> None:
""" """
Queries the persistence layer for open trades and handles them, Queries the persistence layer for open trades and handles them,
otherwise a new trade is created. otherwise a new trade is created.
:return: True if one or more trades has been created or closed, False otherwise :return: True if one or more trades has been created or closed, False otherwise
""" """
state_changed = False
# Check whether markets have to be reloaded # Check whether markets have to be reloaded
self.exchange._reload_markets() self.exchange._reload_markets()
@ -132,19 +137,17 @@ class FreqtradeBot(object):
# First process current opened trades # First process current opened trades
for trade in trades: for trade in trades:
state_changed |= self.process_maybe_execute_sell(trade) self.process_maybe_execute_sell(trade)
# Then looking for buy opportunities # Then looking for buy opportunities
if len(trades) < self.config['max_open_trades']: if len(trades) < self.config['max_open_trades']:
state_changed = self.process_maybe_execute_buy() self.process_maybe_execute_buy()
if 'unfilledtimeout' in self.config: if 'unfilledtimeout' in self.config:
# Check and handle any timed out open orders # Check and handle any timed out open orders
self.check_handle_timedout() self.check_handle_timedout()
Trade.session.flush() Trade.session.flush()
return state_changed
def _extend_whitelist_with_trades(self, whitelist: List[str], trades: List[Any]): def _extend_whitelist_with_trades(self, whitelist: List[str], trades: List[Any]):
""" """
Extend whitelist with pairs from open trades Extend whitelist with pairs from open trades
@ -253,11 +256,12 @@ class FreqtradeBot(object):
amount_reserve_percent = max(amount_reserve_percent, 0.5) amount_reserve_percent = max(amount_reserve_percent, 0.5)
return min(min_stake_amounts) / amount_reserve_percent return min(min_stake_amounts) / amount_reserve_percent
def create_trade(self) -> bool: def create_trades(self) -> bool:
""" """
Checks the implemented trading indicator(s) for a randomly picked pair, Checks the implemented trading strategy for buy-signals, using the active pair whitelist.
if one pair triggers the buy_signal a new trade record gets created If a pair triggers the buy_signal a new trade record gets created.
:return: True if a trade object has been created and persisted, False otherwise Checks pairs as long as the open trade count is below `max_open_trades`.
:return: True if at least one trade has been created.
""" """
interval = self.strategy.ticker_interval interval = self.strategy.ticker_interval
whitelist = copy.deepcopy(self.active_pair_whitelist) whitelist = copy.deepcopy(self.active_pair_whitelist)
@ -276,15 +280,19 @@ class FreqtradeBot(object):
logger.info("No currency pair in whitelist, but checking to sell open trades.") logger.info("No currency pair in whitelist, but checking to sell open trades.")
return False return False
buycount = 0
# running get_signal on historical data fetched # running get_signal on historical data fetched
for _pair in whitelist: for _pair in whitelist:
if self.strategy.is_pair_locked(_pair):
logger.info(f"Pair {_pair} is currently locked.")
continue
(buy, sell) = self.strategy.get_signal( (buy, sell) = self.strategy.get_signal(
_pair, interval, self.dataprovider.ohlcv(_pair, self.strategy.ticker_interval)) _pair, interval, self.dataprovider.ohlcv(_pair, self.strategy.ticker_interval))
if buy and not sell: if buy and not sell and len(Trade.get_open_trades()) < self.config['max_open_trades']:
stake_amount = self._get_trade_stake_amount(_pair) stake_amount = self._get_trade_stake_amount(_pair)
if not stake_amount: if not stake_amount:
return False continue
logger.info(f"Buy signal found: about create a new trade with stake_amount: " logger.info(f"Buy signal found: about create a new trade with stake_amount: "
f"{stake_amount} ...") f"{stake_amount} ...")
@ -294,12 +302,13 @@ class FreqtradeBot(object):
if (bidstrat_check_depth_of_market.get('enabled', False)) and\ if (bidstrat_check_depth_of_market.get('enabled', False)) and\
(bidstrat_check_depth_of_market.get('bids_to_ask_delta', 0) > 0): (bidstrat_check_depth_of_market.get('bids_to_ask_delta', 0) > 0):
if self._check_depth_of_market_buy(_pair, bidstrat_check_depth_of_market): if self._check_depth_of_market_buy(_pair, bidstrat_check_depth_of_market):
return self.execute_buy(_pair, stake_amount) buycount += self.execute_buy(_pair, stake_amount)
else: else:
return False continue
return self.execute_buy(_pair, stake_amount)
return False buycount += self.execute_buy(_pair, stake_amount)
return buycount > 0
def _check_depth_of_market_buy(self, pair: str, conf: Dict) -> bool: def _check_depth_of_market_buy(self, pair: str, conf: Dict) -> bool:
""" """
@ -423,21 +432,17 @@ class FreqtradeBot(object):
return True return True
def process_maybe_execute_buy(self) -> bool: def process_maybe_execute_buy(self) -> None:
""" """
Tries to execute a buy trade in a safe way Tries to execute a buy trade in a safe way
:return: True if executed :return: True if executed
""" """
try: try:
# Create entity and execute trade # Create entity and execute trade
if self.create_trade(): if not self.create_trades():
return True logger.info('Found no buy signals for whitelisted currencies. Trying again...')
logger.info('Found no buy signals for whitelisted currencies. Trying again..')
return False
except DependencyException as exception: except DependencyException as exception:
logger.warning('Unable to create trade: %s', exception) logger.warning('Unable to create trade: %s', exception)
return False
def process_maybe_execute_sell(self, trade: Trade) -> bool: def process_maybe_execute_sell(self, trade: Trade) -> bool:
""" """
@ -672,6 +677,9 @@ class FreqtradeBot(object):
if stoploss_order and stoploss_order['status'] == 'closed': if stoploss_order and stoploss_order['status'] == 'closed':
trade.sell_reason = SellType.STOPLOSS_ON_EXCHANGE.value trade.sell_reason = SellType.STOPLOSS_ON_EXCHANGE.value
trade.update(stoploss_order) trade.update(stoploss_order)
# Lock pair for one candle to prevent immediate rebuys
self.strategy.lock_pair(trade.pair,
timeframe_to_next_date(self.config['ticker_interval']))
self._notify_sell(trade) self._notify_sell(trade)
return True return True
@ -869,16 +877,23 @@ class FreqtradeBot(object):
logger.exception(f"Could not cancel stoploss order {trade.stoploss_order_id}") logger.exception(f"Could not cancel stoploss order {trade.stoploss_order_id}")
# Execute sell and update trade record # Execute sell and update trade record
order_id = self.exchange.sell(pair=str(trade.pair), order = self.exchange.sell(pair=str(trade.pair),
ordertype=self.strategy.order_types[sell_type], ordertype=self.strategy.order_types[sell_type],
amount=trade.amount, rate=limit, amount=trade.amount, rate=limit,
time_in_force=self.strategy.order_time_in_force['sell'] time_in_force=self.strategy.order_time_in_force['sell']
)['id'] )
trade.open_order_id = order_id trade.open_order_id = order['id']
trade.close_rate_requested = limit trade.close_rate_requested = limit
trade.sell_reason = sell_reason.value trade.sell_reason = sell_reason.value
# In case of market sell orders the order can be closed immediately
if order.get('status', 'unknown') == 'closed':
trade.update(order)
Trade.session.flush() Trade.session.flush()
# Lock pair for one candle to prevent immediate rebuys
self.strategy.lock_pair(trade.pair, timeframe_to_next_date(self.config['ticker_interval']))
self._notify_sell(trade) self._notify_sell(trade)
def _notify_sell(self, trade: Trade): def _notify_sell(self, trade: Trade):


@ -12,7 +12,7 @@ from typing import Any, Dict, List, NamedTuple, Optional
from pandas import DataFrame from pandas import DataFrame
from freqtrade import OperationalException from freqtrade import OperationalException
from freqtrade.configuration import Arguments from freqtrade.configuration import TimeRange
from freqtrade.data import history from freqtrade.data import history
from freqtrade.data.dataprovider import DataProvider from freqtrade.data.dataprovider import DataProvider
from freqtrade.exchange import timeframe_to_minutes from freqtrade.exchange import timeframe_to_minutes
@ -404,7 +404,7 @@ class Backtesting(object):
logger.info('Using stake_currency: %s ...', self.config['stake_currency']) logger.info('Using stake_currency: %s ...', self.config['stake_currency'])
logger.info('Using stake_amount: %s ...', self.config['stake_amount']) logger.info('Using stake_amount: %s ...', self.config['stake_amount'])
timerange = Arguments.parse_timerange(None if self.config.get( timerange = TimeRange.parse_timerange(None if self.config.get(
'timerange') is None else str(self.config.get('timerange'))) 'timerange') is None else str(self.config.get('timerange')))
data = history.load_data( data = history.load_data(
datadir=Path(self.config['datadir']) if self.config.get('datadir') else None, datadir=Path(self.config['datadir']) if self.config.get('datadir') else None,


@ -14,36 +14,48 @@ from freqtrade.optimize.hyperopt_interface import IHyperOpt
class DefaultHyperOpts(IHyperOpt): class DefaultHyperOpts(IHyperOpt):
""" """
Default hyperopt provided by the Freqtrade bot. Default hyperopt provided by the Freqtrade bot.
You can override it with your own hyperopt You can override it with your own Hyperopt
""" """
@staticmethod @staticmethod
def populate_indicators(dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_indicators(dataframe: DataFrame, metadata: dict) -> DataFrame:
"""
Add several indicators needed for buy and sell strategies defined below.
"""
# ADX
dataframe['adx'] = ta.ADX(dataframe) dataframe['adx'] = ta.ADX(dataframe)
# MACD
macd = ta.MACD(dataframe) macd = ta.MACD(dataframe)
dataframe['macd'] = macd['macd'] dataframe['macd'] = macd['macd']
dataframe['macdsignal'] = macd['macdsignal'] dataframe['macdsignal'] = macd['macdsignal']
# MFI
dataframe['mfi'] = ta.MFI(dataframe) dataframe['mfi'] = ta.MFI(dataframe)
# RSI
dataframe['rsi'] = ta.RSI(dataframe) dataframe['rsi'] = ta.RSI(dataframe)
# Stochastic Fast
stoch_fast = ta.STOCHF(dataframe) stoch_fast = ta.STOCHF(dataframe)
dataframe['fastd'] = stoch_fast['fastd'] dataframe['fastd'] = stoch_fast['fastd']
# Minus-DI
dataframe['minus_di'] = ta.MINUS_DI(dataframe) dataframe['minus_di'] = ta.MINUS_DI(dataframe)
# Bollinger bands # Bollinger bands
bollinger = qtpylib.bollinger_bands(qtpylib.typical_price(dataframe), window=20, stds=2) bollinger = qtpylib.bollinger_bands(qtpylib.typical_price(dataframe), window=20, stds=2)
dataframe['bb_lowerband'] = bollinger['lower'] dataframe['bb_lowerband'] = bollinger['lower']
dataframe['bb_upperband'] = bollinger['upper'] dataframe['bb_upperband'] = bollinger['upper']
# SAR
dataframe['sar'] = ta.SAR(dataframe) dataframe['sar'] = ta.SAR(dataframe)
return dataframe return dataframe
@staticmethod @staticmethod
def buy_strategy_generator(params: Dict[str, Any]) -> Callable: def buy_strategy_generator(params: Dict[str, Any]) -> Callable:
""" """
Define the buy strategy parameters to be used by hyperopt Define the buy strategy parameters to be used by Hyperopt.
""" """
def populate_buy_trend(dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_buy_trend(dataframe: DataFrame, metadata: dict) -> DataFrame:
""" """
Buy strategy Hyperopt will build and use Buy strategy Hyperopt will build and use.
""" """
conditions = [] conditions = []
# GUARDS AND TRENDS # GUARDS AND TRENDS
if 'mfi-enabled' in params and params['mfi-enabled']: if 'mfi-enabled' in params and params['mfi-enabled']:
conditions.append(dataframe['mfi'] < params['mfi-value']) conditions.append(dataframe['mfi'] < params['mfi-value'])
@ -79,7 +91,7 @@ class DefaultHyperOpts(IHyperOpt):
@staticmethod @staticmethod
def indicator_space() -> List[Dimension]: def indicator_space() -> List[Dimension]:
""" """
Define your Hyperopt space for searching strategy parameters Define your Hyperopt space for searching buy strategy parameters.
""" """
return [ return [
Integer(10, 25, name='mfi-value'), Integer(10, 25, name='mfi-value'),
@ -96,14 +108,14 @@ class DefaultHyperOpts(IHyperOpt):
@staticmethod @staticmethod
def sell_strategy_generator(params: Dict[str, Any]) -> Callable: def sell_strategy_generator(params: Dict[str, Any]) -> Callable:
""" """
Define the sell strategy parameters to be used by hyperopt Define the sell strategy parameters to be used by Hyperopt.
""" """
def populate_sell_trend(dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_sell_trend(dataframe: DataFrame, metadata: dict) -> DataFrame:
""" """
Sell strategy Hyperopt will build and use Sell strategy Hyperopt will build and use.
""" """
# print(params)
conditions = [] conditions = []
# GUARDS AND TRENDS # GUARDS AND TRENDS
if 'sell-mfi-enabled' in params and params['sell-mfi-enabled']: if 'sell-mfi-enabled' in params and params['sell-mfi-enabled']:
conditions.append(dataframe['mfi'] > params['sell-mfi-value']) conditions.append(dataframe['mfi'] > params['sell-mfi-value'])
@ -139,7 +151,7 @@ class DefaultHyperOpts(IHyperOpt):
@staticmethod @staticmethod
def sell_indicator_space() -> List[Dimension]: def sell_indicator_space() -> List[Dimension]:
""" """
Define your Hyperopt space for searching sell strategy parameters Define your Hyperopt space for searching sell strategy parameters.
""" """
return [ return [
Integer(75, 100, name='sell-mfi-value'), Integer(75, 100, name='sell-mfi-value'),
@ -157,9 +169,9 @@ class DefaultHyperOpts(IHyperOpt):
def populate_buy_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_buy_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
""" """
Based on TA indicators. Should be a copy of from strategy Based on TA indicators. Should be a copy of same method from strategy.
must align to populate_indicators in this file Must align to populate_indicators in this file.
Only used when --spaces does not include buy Only used when --spaces does not include buy space.
""" """
dataframe.loc[ dataframe.loc[
( (
@ -174,9 +186,9 @@ class DefaultHyperOpts(IHyperOpt):
def populate_sell_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_sell_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
""" """
Based on TA indicators. Should be a copy of from strategy Based on TA indicators. Should be a copy of same method from strategy.
must align to populate_indicators in this file Must align to populate_indicators in this file.
Only used when --spaces does not include sell Only used when --spaces does not include sell space.
""" """
dataframe.loc[ dataframe.loc[
( (
@ -186,4 +198,5 @@ class DefaultHyperOpts(IHyperOpt):
(dataframe['fastd'] > 54) (dataframe['fastd'] > 54)
), ),
'sell'] = 1 'sell'] = 1
return dataframe return dataframe
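
The updated docstrings clarify that `indicator_space()` and `sell_indicator_space()` describe the buy- and sell-side search spaces for Hyperopt. As a rough illustration of what a user-defined space can look like: `Integer(10, 25, name='mfi-value')` and the `mfi-enabled` toggle come from the default hyperopt above, while the `Real` dimension and its name are made up for the example; the `Dimension` subclasses are provided by scikit-optimize.

```python
from typing import List

from skopt.space import Categorical, Dimension, Integer, Real


def indicator_space() -> List[Dimension]:
    """Hypothetical buy-space: which guards to enable and their thresholds."""
    return [
        Integer(10, 25, name='mfi-value'),            # same as the default space above
        Real(0.01, 0.05, name='example-threshold'),   # made-up parameter for illustration
        Categorical([True, False], name='mfi-enabled'),
    ]


# Hyperopt samples one value per Dimension and passes them to
# buy_strategy_generator() as a dict keyed by the Dimension names.
for dim in indicator_space():
    print(dim.name)
```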

View File

@ -9,7 +9,7 @@ from tabulate import tabulate
from freqtrade import constants from freqtrade import constants
from freqtrade.edge import Edge from freqtrade.edge import Edge
from freqtrade.configuration import Arguments from freqtrade.configuration import TimeRange
from freqtrade.exchange import Exchange from freqtrade.exchange import Exchange
from freqtrade.resolvers import StrategyResolver from freqtrade.resolvers import StrategyResolver
@ -41,7 +41,7 @@ class EdgeCli(object):
self.edge = Edge(config, self.exchange, self.strategy) self.edge = Edge(config, self.exchange, self.strategy)
self.edge._refresh_pairs = self.config.get('refresh_pairs', False) self.edge._refresh_pairs = self.config.get('refresh_pairs', False)
self.timerange = Arguments.parse_timerange(None if self.config.get( self.timerange = TimeRange.parse_timerange(None if self.config.get(
'timerange') is None else str(self.config.get('timerange'))) 'timerange') is None else str(self.config.get('timerange')))
self.edge._timerange = self.timerange self.edge._timerange = self.timerange

View File

@ -7,17 +7,22 @@ This module contains the hyperopt logic
import logging import logging
import sys import sys
from collections import OrderedDict
from operator import itemgetter from operator import itemgetter
from pathlib import Path from pathlib import Path
from pprint import pprint from pprint import pprint
from typing import Any, Dict, List, Optional from typing import Any, Dict, List, Optional
import rapidjson
from colorama import init as colorama_init
from colorama import Fore, Style
from joblib import Parallel, delayed, dump, load, wrap_non_picklable_objects, cpu_count from joblib import Parallel, delayed, dump, load, wrap_non_picklable_objects, cpu_count
from pandas import DataFrame from pandas import DataFrame
from skopt import Optimizer from skopt import Optimizer
from skopt.space import Dimension from skopt.space import Dimension
from freqtrade.configuration import Arguments from freqtrade.configuration import TimeRange
from freqtrade.data.history import load_data, get_timeframe from freqtrade.data.history import load_data, get_timeframe
from freqtrade.optimize.backtesting import Backtesting from freqtrade.optimize.backtesting import Backtesting
# Import IHyperOptLoss to allow users import from this file # Import IHyperOptLoss to allow users import from this file
@ -136,30 +141,61 @@ class Hyperopt(Backtesting):
results = sorted(self.trials, key=itemgetter('loss')) results = sorted(self.trials, key=itemgetter('loss'))
best_result = results[0] best_result = results[0]
params = best_result['params'] params = best_result['params']
log_str = self.format_results_logstring(best_result) log_str = self.format_results_logstring(best_result)
print(f"\nBest result:\n\n{log_str}\n") print(f"\nBest result:\n\n{log_str}\n")
if self.has_space('buy'):
print('Buy hyperspace params:') if self.config.get('print_json'):
pprint({p.name: params.get(p.name) for p in self.hyperopt_space('buy')}, result_dict: Dict = {}
indent=4) if self.has_space('buy') or self.has_space('sell'):
if self.has_space('sell'): result_dict['params'] = {}
print('Sell hyperspace params:') if self.has_space('buy'):
pprint({p.name: params.get(p.name) for p in self.hyperopt_space('sell')}, result_dict['params'].update({p.name: params.get(p.name)
indent=4) for p in self.hyperopt_space('buy')})
if self.has_space('roi'): if self.has_space('sell'):
print("ROI table:") result_dict['params'].update({p.name: params.get(p.name)
pprint(self.custom_hyperopt.generate_roi_table(params), indent=4) for p in self.hyperopt_space('sell')})
if self.has_space('stoploss'): if self.has_space('roi'):
print(f"Stoploss: {params.get('stoploss')}") # Convert keys in min_roi dict to strings because
# rapidjson cannot dump dicts with integer keys...
# OrderedDict is used to keep the numeric order of the items
# in the dict.
result_dict['minimal_roi'] = OrderedDict(
(str(k), v) for k, v in self.custom_hyperopt.generate_roi_table(params).items()
)
if self.has_space('stoploss'):
result_dict['stoploss'] = params.get('stoploss')
print(rapidjson.dumps(result_dict, default=str, number_mode=rapidjson.NM_NATIVE))
else:
if self.has_space('buy'):
print('Buy hyperspace params:')
pprint({p.name: params.get(p.name) for p in self.hyperopt_space('buy')},
indent=4)
if self.has_space('sell'):
print('Sell hyperspace params:')
pprint({p.name: params.get(p.name) for p in self.hyperopt_space('sell')},
indent=4)
if self.has_space('roi'):
print("ROI table:")
pprint(self.custom_hyperopt.generate_roi_table(params), indent=4)
if self.has_space('stoploss'):
print(f"Stoploss: {params.get('stoploss')}")
def log_results(self, results) -> None: def log_results(self, results) -> None:
""" """
Log results if it is better than any previous evaluation Log results if it is better than any previous evaluation
""" """
print_all = self.config.get('print_all', False) print_all = self.config.get('print_all', False)
if print_all or results['loss'] < self.current_best_loss: is_best_loss = results['loss'] < self.current_best_loss
if print_all or is_best_loss:
if is_best_loss:
self.current_best_loss = results['loss']
log_str = self.format_results_logstring(results) log_str = self.format_results_logstring(results)
# Colorize output
if self.config.get('print_colorized', False):
if results['total_profit'] > 0:
log_str = Fore.GREEN + log_str
if print_all and is_best_loss:
log_str = Style.BRIGHT + log_str
if print_all: if print_all:
print(log_str) print(log_str)
else: else:
@ -174,7 +210,6 @@ class Hyperopt(Backtesting):
total = self.total_epochs total = self.total_epochs
res = results['results_explanation'] res = results['results_explanation']
loss = results['loss'] loss = results['loss']
self.current_best_loss = results['loss']
log_str = f'{current:5d}/{total}: {res} Objective: {loss:.5f}' log_str = f'{current:5d}/{total}: {res} Objective: {loss:.5f}'
log_str = f'*{log_str}' if results['is_initial_point'] else f' {log_str}' log_str = f'*{log_str}' if results['is_initial_point'] else f' {log_str}'
return log_str return log_str
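
`log_results` now highlights profitable epochs in green and, when the print-all style output is active, marks new best results as bright, using colorama (initialized with `colorama_init(autoreset=True)` in `start()` further down). A stripped-down sketch of that logic; the result dict mirrors the fields set in the optimizer code below, and the example values are invented:

```python
from colorama import Fore, Style
from colorama import init as colorama_init

colorama_init(autoreset=True)  # reset colors automatically after each print

current_best_loss = 0.5
results = {"loss": 0.31, "total_profit": 0.0123,
           "results_explanation": "   12 trades. Avg profit  0.52%."}

is_best_loss = results["loss"] < current_best_loss
log_str = results["results_explanation"]

if results["total_profit"] > 0:
    log_str = Fore.GREEN + log_str      # profitable epoch -> green
if is_best_loss:
    log_str = Style.BRIGHT + log_str    # new best objective -> bright

print(log_str)
```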
@ -242,6 +277,7 @@ class Hyperopt(Backtesting):
results_explanation = self.format_results(results) results_explanation = self.format_results(results)
trade_count = len(results.index) trade_count = len(results.index)
total_profit = results.profit_abs.sum()
# If this evaluation contains too short amount of trades to be # If this evaluation contains too short amount of trades to be
# interesting -- consider it as 'bad' (assigned max. loss value) # interesting -- consider it as 'bad' (assigned max. loss value)
@ -252,6 +288,7 @@ class Hyperopt(Backtesting):
'loss': MAX_LOSS, 'loss': MAX_LOSS,
'params': params, 'params': params,
'results_explanation': results_explanation, 'results_explanation': results_explanation,
'total_profit': total_profit,
} }
loss = self.calculate_loss(results=results, trade_count=trade_count, loss = self.calculate_loss(results=results, trade_count=trade_count,
@ -261,6 +298,7 @@ class Hyperopt(Backtesting):
'loss': loss, 'loss': loss,
'params': params, 'params': params,
'results_explanation': results_explanation, 'results_explanation': results_explanation,
'total_profit': total_profit,
} }
def format_results(self, results: DataFrame) -> str: def format_results(self, results: DataFrame) -> str:
@ -302,7 +340,7 @@ class Hyperopt(Backtesting):
) )
def start(self) -> None: def start(self) -> None:
timerange = Arguments.parse_timerange(None if self.config.get( timerange = TimeRange.parse_timerange(None if self.config.get(
'timerange') is None else str(self.config.get('timerange'))) 'timerange') is None else str(self.config.get('timerange')))
data = load_data( data = load_data(
datadir=Path(self.config['datadir']) if self.config.get('datadir') else None, datadir=Path(self.config['datadir']) if self.config.get('datadir') else None,
@ -344,6 +382,10 @@ class Hyperopt(Backtesting):
logger.info(f'Number of parallel jobs set as: {config_jobs}') logger.info(f'Number of parallel jobs set as: {config_jobs}')
opt = self.get_optimizer(config_jobs) opt = self.get_optimizer(config_jobs)
if self.config.get('print_colorized', False):
colorama_init(autoreset=True)
try: try:
with Parallel(n_jobs=config_jobs) as parallel: with Parallel(n_jobs=config_jobs) as parallel:
jobs = parallel._effective_n_jobs() jobs = parallel._effective_n_jobs()

View File

@ -4,7 +4,7 @@ from typing import Dict, List, Optional
import pandas as pd import pandas as pd
from freqtrade.configuration import Arguments from freqtrade.configuration import TimeRange
from freqtrade.data import history from freqtrade.data import history
from freqtrade.data.btanalysis import (combine_tickers_with_mean, from freqtrade.data.btanalysis import (combine_tickers_with_mean,
create_cum_profit, load_trades) create_cum_profit, load_trades)
@ -42,7 +42,7 @@ def init_plotscript(config):
pairs = config["exchange"]["pair_whitelist"] pairs = config["exchange"]["pair_whitelist"]
# Set timerange to use # Set timerange to use
timerange = Arguments.parse_timerange(config.get("timerange")) timerange = TimeRange.parse_timerange(config.get("timerange"))
tickers = history.load_data( tickers = history.load_data(
datadir=Path(str(config.get("datadir"))), datadir=Path(str(config.get("datadir"))),

View File

@ -10,7 +10,7 @@ from typing import Dict, Any, List, Optional
import arrow import arrow
import sqlalchemy as sql import sqlalchemy as sql
from numpy import mean, nan_to_num, NAN from numpy import mean, NAN
from pandas import DataFrame from pandas import DataFrame
from freqtrade import TemporaryError, DependencyException from freqtrade import TemporaryError, DependencyException
@ -195,9 +195,9 @@ class RPC(object):
trades = Trade.query.order_by(Trade.id).all() trades = Trade.query.order_by(Trade.id).all()
profit_all_coin = [] profit_all_coin = []
profit_all_percent = [] profit_all_perc = []
profit_closed_coin = [] profit_closed_coin = []
profit_closed_percent = [] profit_closed_perc = []
durations = [] durations = []
for trade in trades: for trade in trades:
@ -211,7 +211,7 @@ class RPC(object):
if not trade.is_open: if not trade.is_open:
profit_percent = trade.calc_profit_percent() profit_percent = trade.calc_profit_percent()
profit_closed_coin.append(trade.calc_profit()) profit_closed_coin.append(trade.calc_profit())
profit_closed_percent.append(profit_percent) profit_closed_perc.append(profit_percent)
else: else:
# Get current rate # Get current rate
try: try:
@ -223,7 +223,7 @@ class RPC(object):
profit_all_coin.append( profit_all_coin.append(
trade.calc_profit(rate=Decimal(trade.close_rate or current_rate)) trade.calc_profit(rate=Decimal(trade.close_rate or current_rate))
) )
profit_all_percent.append(profit_percent) profit_all_perc.append(profit_percent)
best_pair = Trade.session.query( best_pair = Trade.session.query(
Trade.pair, sql.func.sum(Trade.close_profit).label('profit_sum') Trade.pair, sql.func.sum(Trade.close_profit).label('profit_sum')
@ -238,7 +238,8 @@ class RPC(object):
# Prepare data to display # Prepare data to display
profit_closed_coin_sum = round(sum(profit_closed_coin), 8) profit_closed_coin_sum = round(sum(profit_closed_coin), 8)
profit_closed_percent = round(nan_to_num(mean(profit_closed_percent)) * 100, 2) profit_closed_percent = (round(mean(profit_closed_perc) * 100, 2) if profit_closed_perc
else 0.0)
profit_closed_fiat = self._fiat_converter.convert_amount( profit_closed_fiat = self._fiat_converter.convert_amount(
profit_closed_coin_sum, profit_closed_coin_sum,
stake_currency, stake_currency,
@ -246,7 +247,7 @@ class RPC(object):
) if self._fiat_converter else 0 ) if self._fiat_converter else 0
profit_all_coin_sum = round(sum(profit_all_coin), 8) profit_all_coin_sum = round(sum(profit_all_coin), 8)
profit_all_percent = round(nan_to_num(mean(profit_all_percent)) * 100, 2) profit_all_percent = round(mean(profit_all_perc) * 100, 2) if profit_all_perc else 0.0
profit_all_fiat = self._fiat_converter.convert_amount( profit_all_fiat = self._fiat_converter.convert_amount(
profit_all_coin_sum, profit_all_coin_sum,
stake_currency, stake_currency,
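
The `_rpc_trade_statistics` change replaces `nan_to_num(mean(...))` with an explicit emptiness check: with no closed (or no open) trades, `numpy.mean([])` produces NaN plus a RuntimeWarning, which the new `np.seterr(all='raise')` in the test conftest below would escalate into an exception. A tiny illustration of the guarded form:

```python
from numpy import mean

profit_closed_perc = []  # no closed trades yet

# Old style: mean([]) yields NaN (plus a RuntimeWarning) that then had to be
# masked with nan_to_num(); the guarded form never calls mean() on an empty list.
profit_closed_percent = (round(mean(profit_closed_perc) * 100, 2)
                         if profit_closed_perc else 0.0)
print(profit_closed_percent)  # -> 0.0
```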

View File

@ -4,7 +4,7 @@ This module defines the interface to apply for strategies
""" """
import logging import logging
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
from datetime import datetime from datetime import datetime, timezone
from enum import Enum from enum import Enum
from typing import Dict, List, NamedTuple, Optional, Tuple from typing import Dict, List, NamedTuple, Optional, Tuple
import warnings import warnings
@ -107,6 +107,7 @@ class IStrategy(ABC):
self.config = config self.config = config
# Dict to determine if analysis is necessary # Dict to determine if analysis is necessary
self._last_candle_seen_per_pair: Dict[str, datetime] = {} self._last_candle_seen_per_pair: Dict[str, datetime] = {}
self._pair_locked_until: Dict[str, datetime] = {}
@abstractmethod @abstractmethod
def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
@ -154,6 +155,24 @@ class IStrategy(ABC):
""" """
return self.__class__.__name__ return self.__class__.__name__
def lock_pair(self, pair: str, until: datetime) -> None:
"""
Locks pair until a given timestamp happens.
Locked pairs are not analyzed, and are prevented from opening new trades.
:param pair: Pair to lock
:param until: datetime in UTC until the pair should be blocked from opening new trades.
Needs to be timezone aware `datetime.now(timezone.utc)`
"""
self._pair_locked_until[pair] = until
def is_pair_locked(self, pair: str) -> bool:
"""
Checks if a pair is currently locked
"""
if pair not in self._pair_locked_until:
return False
return self._pair_locked_until[pair] >= datetime.now(timezone.utc)
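
The new `lock_pair()` / `is_pair_locked()` pair gives strategies a way to temporarily block a pair from opening new trades. A standalone usage sketch; only the two methods themselves come from the interface above, while the demo class, the 30-minute duration and the trigger are illustrative:

```python
from datetime import datetime, timedelta, timezone
from typing import Dict


class PairLockDemo:
    """Standalone mirror of the IStrategy locking helpers shown above."""

    def __init__(self) -> None:
        self._pair_locked_until: Dict[str, datetime] = {}

    def lock_pair(self, pair: str, until: datetime) -> None:
        # `until` must be timezone-aware, e.g. datetime.now(timezone.utc)
        self._pair_locked_until[pair] = until

    def is_pair_locked(self, pair: str) -> bool:
        if pair not in self._pair_locked_until:
            return False
        return self._pair_locked_until[pair] >= datetime.now(timezone.utc)


demo = PairLockDemo()
# e.g. back off from a pair for 30 minutes after an exit went badly
demo.lock_pair("ETH/BTC", datetime.now(timezone.utc) + timedelta(minutes=30))
print(demo.is_pair_locked("ETH/BTC"))  # True
print(demo.is_pair_locked("LTC/BTC"))  # False
```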
def analyze_ticker(self, dataframe: DataFrame, metadata: dict) -> DataFrame: def analyze_ticker(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
""" """
Parses the given ticker history and returns a populated DataFrame Parses the given ticker history and returns a populated DataFrame
@ -260,8 +279,8 @@ class IStrategy(ABC):
sell: bool, low: float = None, high: float = None, sell: bool, low: float = None, high: float = None,
force_stoploss: float = 0) -> SellCheckTuple: force_stoploss: float = 0) -> SellCheckTuple:
""" """
This function evaluate if on the condition required to trigger a sell has been reached This function evaluates if one of the conditions required to trigger a sell
if the threshold is reached and updates the trade record. has been reached, which can either be a stop-loss, ROI or sell-signal.
:param low: Only used during backtesting to simulate stoploss :param low: Only used during backtesting to simulate stoploss
:param high: Only used during backtesting, to simulate ROI :param high: Only used during backtesting, to simulate ROI
:param force_stoploss: Externally provided stoploss :param force_stoploss: Externally provided stoploss

View File

@ -0,0 +1,133 @@
{
/* Single-line C-style comment */
"max_open_trades": 3,
/*
* Multi-line C-style comment
*/
"stake_currency": "BTC",
"stake_amount": 0.05,
"fiat_display_currency": "USD", // C++-style comment
"amount_reserve_percent" : 0.05, // And more, tabs before this comment
"dry_run": false,
"ticker_interval": "5m",
"trailing_stop": false,
"trailing_stop_positive": 0.005,
"trailing_stop_positive_offset": 0.0051,
"trailing_only_offset_is_reached": false,
"minimal_roi": {
"40": 0.0,
"30": 0.01,
"20": 0.02,
"0": 0.04
},
"stoploss": -0.10,
"unfilledtimeout": {
"buy": 10,
"sell": 30, // Trailing comma should also be accepted now
},
"bid_strategy": {
"use_order_book": false,
"ask_last_balance": 0.0,
"order_book_top": 1,
"check_depth_of_market": {
"enabled": false,
"bids_to_ask_delta": 1
}
},
"ask_strategy":{
"use_order_book": false,
"order_book_min": 1,
"order_book_max": 9
},
"order_types": {
"buy": "limit",
"sell": "limit",
"stoploss": "market",
"stoploss_on_exchange": false,
"stoploss_on_exchange_interval": 60
},
"order_time_in_force": {
"buy": "gtc",
"sell": "gtc"
},
"pairlist": {
"method": "VolumePairList",
"config": {
"number_assets": 20,
"sort_key": "quoteVolume",
"precision_filter": false
}
},
"exchange": {
"name": "bittrex",
"sandbox": false,
"key": "your_exchange_key",
"secret": "your_exchange_secret",
"password": "",
"ccxt_config": {"enableRateLimit": true},
"ccxt_async_config": {
"enableRateLimit": false,
"rateLimit": 500,
"aiohttp_trust_env": false
},
"pair_whitelist": [
"ETH/BTC",
"LTC/BTC",
"ETC/BTC",
"DASH/BTC",
"ZEC/BTC",
"XLM/BTC",
"NXT/BTC",
"POWR/BTC",
"ADA/BTC",
"XMR/BTC"
],
"pair_blacklist": [
"DOGE/BTC"
],
"outdated_offset": 5,
"markets_refresh_interval": 60
},
"edge": {
"enabled": false,
"process_throttle_secs": 3600,
"calculate_since_number_of_days": 7,
"capital_available_percentage": 0.5,
"allowed_risk": 0.01,
"stoploss_range_min": -0.01,
"stoploss_range_max": -0.1,
"stoploss_range_step": -0.01,
"minimum_winrate": 0.60,
"minimum_expectancy": 0.20,
"min_trade_number": 10,
"max_trade_duration_minute": 1440,
"remove_pumps": false
},
"experimental": {
"use_sell_signal": false,
"sell_profit_only": false,
"ignore_roi_if_buy_signal": false
},
"telegram": {
// We can now comment out some settings
// "enabled": true,
"enabled": false,
"token": "your_telegram_token",
"chat_id": "your_telegram_chat_id"
},
"api_server": {
"enabled": false,
"listen_ip_address": "127.0.0.1",
"listen_port": 8080,
"username": "freqtrader",
"password": "SuperSecurePassword"
},
"db_url": "sqlite:///tradesv3.sqlite",
"initial_state": "running",
"forcebuy_enable": false,
"internals": {
"process_throttle_secs": 5
},
"strategy": "DefaultStrategy",
"strategy_path": "user_data/strategies/"
}
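
This new test configuration exercises JSON with C/C++-style comments and a trailing comma. The standard-library `json` module rejects both, so loading such a file needs a more permissive parser; presumably the configuration loader relies on something along the lines of rapidjson's parse modes. The exact flags freqtrade uses are not shown in this diff, so treat the call below as an assumption (the constants themselves are part of python-rapidjson):

```python
import rapidjson

raw = """
{
    // C++-style comment
    "max_open_trades": 3,
    /* C-style comment */
    "stake_currency": "BTC",
    "dry_run": false,   // note the trailing comma before the closing brace
}
"""

config = rapidjson.loads(
    raw,
    parse_mode=rapidjson.PM_COMMENTS | rapidjson.PM_TRAILING_COMMAS,
)
print(config["max_open_trades"], config["stake_currency"])
```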

View File

@ -10,6 +10,7 @@ from unittest.mock import MagicMock, PropertyMock
import arrow import arrow
import pytest import pytest
import numpy as np
from telegram import Chat, Message, Update from telegram import Chat, Message, Update
from freqtrade import constants, persistence from freqtrade import constants, persistence
@ -25,17 +26,21 @@ from freqtrade.worker import Worker
logging.getLogger('').setLevel(logging.INFO) logging.getLogger('').setLevel(logging.INFO)
# Do not mask numpy errors as warnings that nobody reads; raise the exception instead
np.seterr(all='raise')
def log_has(line, logs): def log_has(line, logs):
# caplog mocker returns log as a tuple: ('freqtrade.something', logging.WARNING, 'foobar') # caplog mocker returns log as a tuple: ('freqtrade.something', logging.WARNING, 'foobar')
# and we want to match line against foobar in the tuple # and we want to match line against foobar in the tuple
return reduce(lambda a, b: a or b, return reduce(lambda a, b: a or b,
filter(lambda x: x[2] == line, logs), filter(lambda x: x[2] == line, logs.record_tuples),
False) False)
def log_has_re(line, logs): def log_has_re(line, logs):
return reduce(lambda a, b: a or b, return reduce(lambda a, b: a or b,
filter(lambda x: re.match(line, x[2]), logs), filter(lambda x: re.match(line, x[2]), logs.record_tuples),
False) False)
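
`log_has` and `log_has_re` now take the pytest `caplog` fixture directly and unpack `record_tuples` themselves, which removes the `caplog.record_tuples` noise from every assertion in the suite. Functionally they reduce to an `any()` over the captured messages; a compact equivalent, written against plain tuples so it runs without pytest (the helper bodies here are a simplification, not the committed code):

```python
import re


def log_has(line, records):
    """records: list of (logger_name, level, message) tuples, as in caplog.record_tuples."""
    return any(message == line for _, _, message in records)


def log_has_re(pattern, records):
    return any(re.match(pattern, message) for _, _, message in records)


records = [("freqtrade.exchange", 20, "Instance is running with dry_run enabled")]
print(log_has("Instance is running with dry_run enabled", records))  # True
print(log_has_re(r"Instance is running .*", records))                # True
```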

View File

@ -4,7 +4,7 @@ import pytest
from arrow import Arrow from arrow import Arrow
from pandas import DataFrame, to_datetime from pandas import DataFrame, to_datetime
from freqtrade.configuration import Arguments, TimeRange from freqtrade.configuration import TimeRange
from freqtrade.data.btanalysis import (BT_DATA_COLUMNS, from freqtrade.data.btanalysis import (BT_DATA_COLUMNS,
combine_tickers_with_mean, combine_tickers_with_mean,
create_cum_profit, create_cum_profit,
@ -121,7 +121,7 @@ def test_combine_tickers_with_mean():
def test_create_cum_profit(): def test_create_cum_profit():
filename = make_testdata_path(None) / "backtest-result_test.json" filename = make_testdata_path(None) / "backtest-result_test.json"
bt_data = load_backtest_data(filename) bt_data = load_backtest_data(filename)
timerange = Arguments.parse_timerange("20180110-20180112") timerange = TimeRange.parse_timerange("20180110-20180112")
df = load_pair_history(pair="POWR/BTC", ticker_interval='5m', df = load_pair_history(pair="POWR/BTC", ticker_interval='5m',
datadir=None, timerange=timerange) datadir=None, timerange=timerange)

View File

@ -18,7 +18,7 @@ def test_parse_ticker_dataframe(ticker_history_list, caplog):
dataframe = parse_ticker_dataframe(ticker_history_list, '5m', dataframe = parse_ticker_dataframe(ticker_history_list, '5m',
pair="UNITTEST/BTC", fill_missing=True) pair="UNITTEST/BTC", fill_missing=True)
assert dataframe.columns.tolist() == columns assert dataframe.columns.tolist() == columns
assert log_has('Parsing tickerlist to dataframe', caplog.record_tuples) assert log_has('Parsing tickerlist to dataframe', caplog)
def test_ohlcv_fill_up_missing_data(caplog): def test_ohlcv_fill_up_missing_data(caplog):
@ -34,8 +34,7 @@ def test_ohlcv_fill_up_missing_data(caplog):
assert (data.columns == data2.columns).all() assert (data.columns == data2.columns).all()
assert log_has(f"Missing data fillup for UNITTEST/BTC: before: " assert log_has(f"Missing data fillup for UNITTEST/BTC: before: "
f"{len(data)} - after: {len(data2)}", f"{len(data)} - after: {len(data2)}", caplog)
caplog.record_tuples)
# Test fillup actually fixes invalid backtest data # Test fillup actually fixes invalid backtest data
min_date, max_date = get_timeframe({'UNITTEST/BTC': data}) min_date, max_date = get_timeframe({'UNITTEST/BTC': data})
@ -97,8 +96,7 @@ def test_ohlcv_fill_up_missing_data2(caplog):
assert (data.columns == data2.columns).all() assert (data.columns == data2.columns).all()
assert log_has(f"Missing data fillup for UNITTEST/BTC: before: " assert log_has(f"Missing data fillup for UNITTEST/BTC: before: "
f"{len(data)} - after: {len(data2)}", f"{len(data)} - after: {len(data2)}", caplog)
caplog.record_tuples)
def test_ohlcv_drop_incomplete(caplog): def test_ohlcv_drop_incomplete(caplog):
@ -140,11 +138,11 @@ def test_ohlcv_drop_incomplete(caplog):
data = parse_ticker_dataframe(ticks, ticker_interval, pair="UNITTEST/BTC", data = parse_ticker_dataframe(ticks, ticker_interval, pair="UNITTEST/BTC",
fill_missing=False, drop_incomplete=False) fill_missing=False, drop_incomplete=False)
assert len(data) == 4 assert len(data) == 4
assert not log_has("Dropping last candle", caplog.record_tuples) assert not log_has("Dropping last candle", caplog)
# Drop last candle # Drop last candle
data = parse_ticker_dataframe(ticks, ticker_interval, pair="UNITTEST/BTC", data = parse_ticker_dataframe(ticks, ticker_interval, pair="UNITTEST/BTC",
fill_missing=False, drop_incomplete=True) fill_missing=False, drop_incomplete=True)
assert len(data) == 3 assert len(data) == 3
assert log_has("Dropping last candle", caplog.record_tuples) assert log_has("Dropping last candle", caplog)

View File

@ -13,6 +13,7 @@ def test_ohlcv(mocker, default_conf, ticker_history):
exchange = get_patched_exchange(mocker, default_conf) exchange = get_patched_exchange(mocker, default_conf)
exchange._klines[("XRP/BTC", ticker_interval)] = ticker_history exchange._klines[("XRP/BTC", ticker_interval)] = ticker_history
exchange._klines[("UNITTEST/BTC", ticker_interval)] = ticker_history exchange._klines[("UNITTEST/BTC", ticker_interval)] = ticker_history
dp = DataProvider(default_conf, exchange) dp = DataProvider(default_conf, exchange)
assert dp.runmode == RunMode.DRY_RUN assert dp.runmode == RunMode.DRY_RUN
assert ticker_history.equals(dp.ohlcv("UNITTEST/BTC", ticker_interval)) assert ticker_history.equals(dp.ohlcv("UNITTEST/BTC", ticker_interval))
@ -37,11 +38,9 @@ def test_ohlcv(mocker, default_conf, ticker_history):
def test_historic_ohlcv(mocker, default_conf, ticker_history): def test_historic_ohlcv(mocker, default_conf, ticker_history):
historymock = MagicMock(return_value=ticker_history) historymock = MagicMock(return_value=ticker_history)
mocker.patch("freqtrade.data.dataprovider.load_pair_history", historymock) mocker.patch("freqtrade.data.dataprovider.load_pair_history", historymock)
# exchange = get_patched_exchange(mocker, default_conf)
dp = DataProvider(default_conf, None) dp = DataProvider(default_conf, None)
data = dp.historic_ohlcv("UNITTEST/BTC", "5m") data = dp.historic_ohlcv("UNITTEST/BTC", "5m")
assert isinstance(data, DataFrame) assert isinstance(data, DataFrame)
@ -51,14 +50,47 @@ def test_historic_ohlcv(mocker, default_conf, ticker_history):
assert historymock.call_args_list[0][1]["ticker_interval"] == "5m" assert historymock.call_args_list[0][1]["ticker_interval"] == "5m"
def test_get_pair_dataframe(mocker, default_conf, ticker_history):
default_conf["runmode"] = RunMode.DRY_RUN
ticker_interval = default_conf["ticker_interval"]
exchange = get_patched_exchange(mocker, default_conf)
exchange._klines[("XRP/BTC", ticker_interval)] = ticker_history
exchange._klines[("UNITTEST/BTC", ticker_interval)] = ticker_history
dp = DataProvider(default_conf, exchange)
assert dp.runmode == RunMode.DRY_RUN
assert ticker_history.equals(dp.get_pair_dataframe("UNITTEST/BTC", ticker_interval))
assert isinstance(dp.get_pair_dataframe("UNITTEST/BTC", ticker_interval), DataFrame)
assert dp.get_pair_dataframe("UNITTEST/BTC", ticker_interval) is not ticker_history
assert not dp.get_pair_dataframe("UNITTEST/BTC", ticker_interval).empty
assert dp.get_pair_dataframe("NONESENSE/AAA", ticker_interval).empty
# Test with and without parameter
assert dp.get_pair_dataframe("UNITTEST/BTC",
ticker_interval).equals(dp.get_pair_dataframe("UNITTEST/BTC"))
default_conf["runmode"] = RunMode.LIVE
dp = DataProvider(default_conf, exchange)
assert dp.runmode == RunMode.LIVE
assert isinstance(dp.get_pair_dataframe("UNITTEST/BTC", ticker_interval), DataFrame)
assert dp.get_pair_dataframe("NONESENSE/AAA", ticker_interval).empty
historymock = MagicMock(return_value=ticker_history)
mocker.patch("freqtrade.data.dataprovider.load_pair_history", historymock)
default_conf["runmode"] = RunMode.BACKTEST
dp = DataProvider(default_conf, exchange)
assert dp.runmode == RunMode.BACKTEST
assert isinstance(dp.get_pair_dataframe("UNITTEST/BTC", ticker_interval), DataFrame)
# assert dp.get_pair_dataframe("NONESENSE/AAA", ticker_interval).empty
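
The new `test_get_pair_dataframe` expects cached exchange klines in live/dry-run mode and stored history in the other run modes. A rough sketch of such a run-mode dispatch; the method body and the stand-in `RunMode` enum are assumptions inferred from the test, not the actual `DataProvider` code:

```python
from enum import Enum

from pandas import DataFrame


class RunMode(Enum):
    """Stand-in for the RunMode enum used in the tests above."""
    LIVE = "live"
    DRY_RUN = "dry_run"
    BACKTEST = "backtest"


class DataProviderSketch:
    def __init__(self, runmode: RunMode, klines: dict, historic: dict) -> None:
        self.runmode = runmode
        self._klines = klines        # (pair, interval) -> DataFrame, refreshed from the exchange
        self._historic = historic    # (pair, interval) -> DataFrame, loaded from disk

    def get_pair_dataframe(self, pair: str, ticker_interval: str) -> DataFrame:
        if self.runmode in (RunMode.LIVE, RunMode.DRY_RUN):
            # Live/dry-run: serve a copy of the cached ohlcv data
            return self._klines.get((pair, ticker_interval), DataFrame()).copy()
        # Backtest/hyperopt: serve downloaded history
        return self._historic.get((pair, ticker_interval), DataFrame())


dp = DataProviderSketch(RunMode.DRY_RUN,
                        {("UNITTEST/BTC", "5m"): DataFrame({"close": [1, 2, 3]})}, {})
print(dp.get_pair_dataframe("UNITTEST/BTC", "5m").empty)    # False
print(dp.get_pair_dataframe("NONESENSE/AAA", "5m").empty)   # True
```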
def test_available_pairs(mocker, default_conf, ticker_history): def test_available_pairs(mocker, default_conf, ticker_history):
exchange = get_patched_exchange(mocker, default_conf) exchange = get_patched_exchange(mocker, default_conf)
ticker_interval = default_conf["ticker_interval"] ticker_interval = default_conf["ticker_interval"]
exchange._klines[("XRP/BTC", ticker_interval)] = ticker_history exchange._klines[("XRP/BTC", ticker_interval)] = ticker_history
exchange._klines[("UNITTEST/BTC", ticker_interval)] = ticker_history exchange._klines[("UNITTEST/BTC", ticker_interval)] = ticker_history
dp = DataProvider(default_conf, exchange)
dp = DataProvider(default_conf, exchange)
assert len(dp.available_pairs) == 2 assert len(dp.available_pairs) == 2
assert dp.available_pairs == [ assert dp.available_pairs == [
("XRP/BTC", ticker_interval), ("XRP/BTC", ticker_interval),

View File

@ -64,8 +64,7 @@ def test_load_data_30min_ticker(mocker, caplog, default_conf) -> None:
assert isinstance(ld, DataFrame) assert isinstance(ld, DataFrame)
assert not log_has( assert not log_has(
'Download history data for pair: "UNITTEST/BTC", interval: 30m ' 'Download history data for pair: "UNITTEST/BTC", interval: 30m '
'and store in None.', 'and store in None.', caplog
caplog.record_tuples
) )
@ -76,21 +75,19 @@ def test_load_data_7min_ticker(mocker, caplog, default_conf) -> None:
assert log_has( assert log_has(
'No history data for pair: "UNITTEST/BTC", interval: 7m. ' 'No history data for pair: "UNITTEST/BTC", interval: 7m. '
'Use --refresh-pairs-cached option or download_backtest_data.py ' 'Use --refresh-pairs-cached option or download_backtest_data.py '
'script to download the data', 'script to download the data', caplog
caplog.record_tuples
) )
def test_load_data_1min_ticker(ticker_history, mocker, caplog) -> None: def test_load_data_1min_ticker(ticker_history, mocker, caplog) -> None:
mocker.patch('freqtrade.exchange.Exchange.get_history', return_value=ticker_history) mocker.patch('freqtrade.exchange.Exchange.get_historic_ohlcv', return_value=ticker_history)
file = os.path.join(os.path.dirname(__file__), '..', 'testdata', 'UNITTEST_BTC-1m.json') file = os.path.join(os.path.dirname(__file__), '..', 'testdata', 'UNITTEST_BTC-1m.json')
_backup_file(file, copy_file=True) _backup_file(file, copy_file=True)
history.load_data(datadir=None, ticker_interval='1m', pairs=['UNITTEST/BTC']) history.load_data(datadir=None, ticker_interval='1m', pairs=['UNITTEST/BTC'])
assert os.path.isfile(file) is True assert os.path.isfile(file) is True
assert not log_has( assert not log_has(
'Download history data for pair: "UNITTEST/BTC", interval: 1m ' 'Download history data for pair: "UNITTEST/BTC", interval: 1m '
'and store in None.', 'and store in None.', caplog
caplog.record_tuples
) )
_clean_test_file(file) _clean_test_file(file)
@ -99,7 +96,7 @@ def test_load_data_with_new_pair_1min(ticker_history_list, mocker, caplog, defau
""" """
Test load_pair_history() with 1 min ticker Test load_pair_history() with 1 min ticker
""" """
mocker.patch('freqtrade.exchange.Exchange.get_history', return_value=ticker_history_list) mocker.patch('freqtrade.exchange.Exchange.get_historic_ohlcv', return_value=ticker_history_list)
exchange = get_patched_exchange(mocker, default_conf) exchange = get_patched_exchange(mocker, default_conf)
file = os.path.join(os.path.dirname(__file__), '..', 'testdata', 'MEME_BTC-1m.json') file = os.path.join(os.path.dirname(__file__), '..', 'testdata', 'MEME_BTC-1m.json')
@ -113,8 +110,7 @@ def test_load_data_with_new_pair_1min(ticker_history_list, mocker, caplog, defau
assert log_has( assert log_has(
'No history data for pair: "MEME/BTC", interval: 1m. ' 'No history data for pair: "MEME/BTC", interval: 1m. '
'Use --refresh-pairs-cached option or download_backtest_data.py ' 'Use --refresh-pairs-cached option or download_backtest_data.py '
'script to download the data', 'script to download the data', caplog
caplog.record_tuples
) )
# download a new pair if refresh_pairs is set # download a new pair if refresh_pairs is set
@ -126,8 +122,7 @@ def test_load_data_with_new_pair_1min(ticker_history_list, mocker, caplog, defau
assert os.path.isfile(file) is True assert os.path.isfile(file) is True
assert log_has( assert log_has(
'Download history data for pair: "MEME/BTC", interval: 1m ' 'Download history data for pair: "MEME/BTC", interval: 1m '
'and store in None.', 'and store in None.', caplog
caplog.record_tuples
) )
with pytest.raises(OperationalException, match=r'Exchange needs to be initialized when.*'): with pytest.raises(OperationalException, match=r'Exchange needs to be initialized when.*'):
history.load_pair_history(datadir=None, history.load_pair_history(datadir=None,
@ -149,7 +144,7 @@ def test_load_data_live(default_conf, mocker, caplog) -> None:
exchange=exchange) exchange=exchange)
assert refresh_mock.call_count == 1 assert refresh_mock.call_count == 1
assert len(refresh_mock.call_args_list[0][0][0]) == 2 assert len(refresh_mock.call_args_list[0][0][0]) == 2
assert log_has('Live: Downloading data for all defined pairs ...', caplog.record_tuples) assert log_has('Live: Downloading data for all defined pairs ...', caplog)
def test_load_data_live_noexchange(default_conf, mocker, caplog) -> None: def test_load_data_live_noexchange(default_conf, mocker, caplog) -> None:
@ -271,7 +266,7 @@ def test_load_cached_data_for_updating(mocker) -> None:
def test_download_pair_history(ticker_history_list, mocker, default_conf) -> None: def test_download_pair_history(ticker_history_list, mocker, default_conf) -> None:
mocker.patch('freqtrade.exchange.Exchange.get_history', return_value=ticker_history_list) mocker.patch('freqtrade.exchange.Exchange.get_historic_ohlcv', return_value=ticker_history_list)
exchange = get_patched_exchange(mocker, default_conf) exchange = get_patched_exchange(mocker, default_conf)
file1_1 = os.path.join(os.path.dirname(__file__), '..', 'testdata', 'MEME_BTC-1m.json') file1_1 = os.path.join(os.path.dirname(__file__), '..', 'testdata', 'MEME_BTC-1m.json')
file1_5 = os.path.join(os.path.dirname(__file__), '..', 'testdata', 'MEME_BTC-5m.json') file1_5 = os.path.join(os.path.dirname(__file__), '..', 'testdata', 'MEME_BTC-5m.json')
@ -324,7 +319,7 @@ def test_download_pair_history2(mocker, default_conf) -> None:
[1509836580000, 0.00161, 0.00161, 0.00161, 0.00161, 82.390199] [1509836580000, 0.00161, 0.00161, 0.00161, 0.00161, 82.390199]
] ]
json_dump_mock = mocker.patch('freqtrade.misc.file_dump_json', return_value=None) json_dump_mock = mocker.patch('freqtrade.misc.file_dump_json', return_value=None)
mocker.patch('freqtrade.exchange.Exchange.get_history', return_value=tick) mocker.patch('freqtrade.exchange.Exchange.get_historic_ohlcv', return_value=tick)
exchange = get_patched_exchange(mocker, default_conf) exchange = get_patched_exchange(mocker, default_conf)
download_pair_history(None, exchange, pair="UNITTEST/BTC", ticker_interval='1m') download_pair_history(None, exchange, pair="UNITTEST/BTC", ticker_interval='1m')
download_pair_history(None, exchange, pair="UNITTEST/BTC", ticker_interval='3m') download_pair_history(None, exchange, pair="UNITTEST/BTC", ticker_interval='3m')
@ -332,7 +327,7 @@ def test_download_pair_history2(mocker, default_conf) -> None:
def test_download_backtesting_data_exception(ticker_history, mocker, caplog, default_conf) -> None: def test_download_backtesting_data_exception(ticker_history, mocker, caplog, default_conf) -> None:
mocker.patch('freqtrade.exchange.Exchange.get_history', mocker.patch('freqtrade.exchange.Exchange.get_historic_ohlcv',
side_effect=Exception('File Error')) side_effect=Exception('File Error'))
exchange = get_patched_exchange(mocker, default_conf) exchange = get_patched_exchange(mocker, default_conf)
@ -350,8 +345,7 @@ def test_download_backtesting_data_exception(ticker_history, mocker, caplog, def
_clean_test_file(file1_5) _clean_test_file(file1_5)
assert log_has( assert log_has(
'Failed to download history data for pair: "MEME/BTC", interval: 1m. ' 'Failed to download history data for pair: "MEME/BTC", interval: 1m. '
'Error: File Error', 'Error: File Error', caplog
caplog.record_tuples
) )
@ -380,7 +374,7 @@ def test_load_partial_missing(caplog) -> None:
start_real = tickerdata['UNITTEST/BTC'].iloc[0, 0] start_real = tickerdata['UNITTEST/BTC'].iloc[0, 0]
assert log_has(f'Missing data at start for pair ' assert log_has(f'Missing data at start for pair '
f'UNITTEST/BTC, data starts at {start_real.strftime("%Y-%m-%d %H:%M:%S")}', f'UNITTEST/BTC, data starts at {start_real.strftime("%Y-%m-%d %H:%M:%S")}',
caplog.record_tuples) caplog)
# Make sure we start fresh - test missing data at end # Make sure we start fresh - test missing data at end
caplog.clear() caplog.clear()
start = arrow.get('2018-01-10T00:00:00') start = arrow.get('2018-01-10T00:00:00')
@ -396,7 +390,7 @@ def test_load_partial_missing(caplog) -> None:
end_real = arrow.get(tickerdata['UNITTEST/BTC'].iloc[-1, 0]).shift(minutes=5) end_real = arrow.get(tickerdata['UNITTEST/BTC'].iloc[-1, 0]).shift(minutes=5)
assert log_has(f'Missing data at end for pair ' assert log_has(f'Missing data at end for pair '
f'UNITTEST/BTC, data ends at {end_real.strftime("%Y-%m-%d %H:%M:%S")}', f'UNITTEST/BTC, data ends at {end_real.strftime("%Y-%m-%d %H:%M:%S")}',
caplog.record_tuples) caplog)
def test_init(default_conf, mocker) -> None: def test_init(default_conf, mocker) -> None:
@ -560,7 +554,7 @@ def test_validate_backtest_data_warn(default_conf, mocker, caplog) -> None:
assert len(caplog.record_tuples) == 1 assert len(caplog.record_tuples) == 1
assert log_has( assert log_has(
"UNITTEST/BTC has missing frames: expected 14396, got 13680, that's 716 missing values", "UNITTEST/BTC has missing frames: expected 14396, got 13680, that's 716 missing values",
caplog.record_tuples) caplog)
def test_validate_backtest_data(default_conf, mocker, caplog) -> None: def test_validate_backtest_data(default_conf, mocker, caplog) -> None:

View File

@ -311,7 +311,7 @@ def test_edge_process_no_data(mocker, edge_conf, caplog):
assert not edge.calculate() assert not edge.calculate()
assert len(edge._cached_pairs) == 0 assert len(edge._cached_pairs) == 0
assert log_has("No data found. Edge is stopped ...", caplog.record_tuples) assert log_has("No data found. Edge is stopped ...", caplog)
assert edge._last_updated == 0 assert edge._last_updated == 0
@ -326,7 +326,7 @@ def test_edge_process_no_trades(mocker, edge_conf, caplog):
assert not edge.calculate() assert not edge.calculate()
assert len(edge._cached_pairs) == 0 assert len(edge._cached_pairs) == 0
assert log_has("No trades found.", caplog.record_tuples) assert log_has("No trades found.", caplog)
def test_edge_init_error(mocker, edge_conf,): def test_edge_init_error(mocker, edge_conf,):

View File

@ -14,7 +14,11 @@ from pandas import DataFrame
from freqtrade import (DependencyException, InvalidOrderException, from freqtrade import (DependencyException, InvalidOrderException,
OperationalException, TemporaryError) OperationalException, TemporaryError)
from freqtrade.exchange import Binance, Exchange, Kraken from freqtrade.exchange import Binance, Exchange, Kraken
from freqtrade.exchange.exchange import API_RETRY_COUNT from freqtrade.exchange.exchange import (API_RETRY_COUNT, timeframe_to_minutes,
timeframe_to_msecs,
timeframe_to_next_date,
timeframe_to_prev_date,
timeframe_to_seconds)
from freqtrade.resolvers.exchange_resolver import ExchangeResolver from freqtrade.resolvers.exchange_resolver import ExchangeResolver
from freqtrade.tests.conftest import get_patched_exchange, log_has, log_has_re from freqtrade.tests.conftest import get_patched_exchange, log_has, log_has_re
@ -62,7 +66,7 @@ async def async_ccxt_exception(mocker, default_conf, api_mock, fun, mock_ccxt_fu
def test_init(default_conf, mocker, caplog): def test_init(default_conf, mocker, caplog):
caplog.set_level(logging.INFO) caplog.set_level(logging.INFO)
get_patched_exchange(mocker, default_conf) get_patched_exchange(mocker, default_conf)
assert log_has('Instance is running with dry_run enabled', caplog.record_tuples) assert log_has('Instance is running with dry_run enabled', caplog)
def test_init_ccxt_kwargs(default_conf, mocker, caplog): def test_init_ccxt_kwargs(default_conf, mocker, caplog):
@ -71,8 +75,7 @@ def test_init_ccxt_kwargs(default_conf, mocker, caplog):
conf = copy.deepcopy(default_conf) conf = copy.deepcopy(default_conf)
conf['exchange']['ccxt_async_config'] = {'aiohttp_trust_env': True} conf['exchange']['ccxt_async_config'] = {'aiohttp_trust_env': True}
ex = Exchange(conf) ex = Exchange(conf)
assert log_has("Applying additional ccxt config: {'aiohttp_trust_env': True}", assert log_has("Applying additional ccxt config: {'aiohttp_trust_env': True}", caplog)
caplog.record_tuples)
assert ex._api_async.aiohttp_trust_env assert ex._api_async.aiohttp_trust_env
assert not ex._api.aiohttp_trust_env assert not ex._api.aiohttp_trust_env
@ -81,20 +84,18 @@ def test_init_ccxt_kwargs(default_conf, mocker, caplog):
conf = copy.deepcopy(default_conf) conf = copy.deepcopy(default_conf)
conf['exchange']['ccxt_config'] = {'TestKWARG': 11} conf['exchange']['ccxt_config'] = {'TestKWARG': 11}
ex = Exchange(conf) ex = Exchange(conf)
assert not log_has("Applying additional ccxt config: {'aiohttp_trust_env': True}", assert not log_has("Applying additional ccxt config: {'aiohttp_trust_env': True}", caplog)
caplog.record_tuples)
assert not ex._api_async.aiohttp_trust_env assert not ex._api_async.aiohttp_trust_env
assert hasattr(ex._api, 'TestKWARG') assert hasattr(ex._api, 'TestKWARG')
assert ex._api.TestKWARG == 11 assert ex._api.TestKWARG == 11
assert not hasattr(ex._api_async, 'TestKWARG') assert not hasattr(ex._api_async, 'TestKWARG')
assert log_has("Applying additional ccxt config: {'TestKWARG': 11}", assert log_has("Applying additional ccxt config: {'TestKWARG': 11}", caplog)
caplog.record_tuples)
def test_destroy(default_conf, mocker, caplog): def test_destroy(default_conf, mocker, caplog):
caplog.set_level(logging.DEBUG) caplog.set_level(logging.DEBUG)
get_patched_exchange(mocker, default_conf) get_patched_exchange(mocker, default_conf)
assert log_has('Exchange object destroyed, closing async loop', caplog.record_tuples) assert log_has('Exchange object destroyed, closing async loop', caplog)
def test_init_exception(default_conf, mocker): def test_init_exception(default_conf, mocker):
@ -120,8 +121,7 @@ def test_exchange_resolver(default_conf, mocker, caplog):
mocker.patch('freqtrade.exchange.Exchange.validate_timeframes', MagicMock()) mocker.patch('freqtrade.exchange.Exchange.validate_timeframes', MagicMock())
exchange = ExchangeResolver('Bittrex', default_conf).exchange exchange = ExchangeResolver('Bittrex', default_conf).exchange
assert isinstance(exchange, Exchange) assert isinstance(exchange, Exchange)
assert log_has_re(r"No .* specific subclass found. Using the generic class instead.", assert log_has_re(r"No .* specific subclass found. Using the generic class instead.", caplog)
caplog.record_tuples)
caplog.clear() caplog.clear()
exchange = ExchangeResolver('kraken', default_conf).exchange exchange = ExchangeResolver('kraken', default_conf).exchange
@ -129,7 +129,7 @@ def test_exchange_resolver(default_conf, mocker, caplog):
assert isinstance(exchange, Kraken) assert isinstance(exchange, Kraken)
assert not isinstance(exchange, Binance) assert not isinstance(exchange, Binance)
assert not log_has_re(r"No .* specific subclass found. Using the generic class instead.", assert not log_has_re(r"No .* specific subclass found. Using the generic class instead.",
caplog.record_tuples) caplog)
exchange = ExchangeResolver('binance', default_conf).exchange exchange = ExchangeResolver('binance', default_conf).exchange
assert isinstance(exchange, Exchange) assert isinstance(exchange, Exchange)
@ -137,7 +137,7 @@ def test_exchange_resolver(default_conf, mocker, caplog):
assert not isinstance(exchange, Kraken) assert not isinstance(exchange, Kraken)
assert not log_has_re(r"No .* specific subclass found. Using the generic class instead.", assert not log_has_re(r"No .* specific subclass found. Using the generic class instead.",
caplog.record_tuples) caplog)
def test_validate_order_time_in_force(default_conf, mocker, caplog): def test_validate_order_time_in_force(default_conf, mocker, caplog):
@ -249,8 +249,7 @@ def test__load_async_markets(default_conf, mocker, caplog):
exchange._api_async.load_markets = Mock(side_effect=ccxt.BaseError("deadbeef")) exchange._api_async.load_markets = Mock(side_effect=ccxt.BaseError("deadbeef"))
exchange._load_async_markets() exchange._load_async_markets()
assert log_has('Could not load async markets. Reason: deadbeef', assert log_has('Could not load async markets. Reason: deadbeef', caplog)
caplog.record_tuples)
def test__load_markets(default_conf, mocker, caplog): def test__load_markets(default_conf, mocker, caplog):
@ -262,7 +261,7 @@ def test__load_markets(default_conf, mocker, caplog):
mocker.patch('freqtrade.exchange.Exchange.validate_timeframes', MagicMock()) mocker.patch('freqtrade.exchange.Exchange.validate_timeframes', MagicMock())
mocker.patch('freqtrade.exchange.Exchange._load_async_markets', MagicMock()) mocker.patch('freqtrade.exchange.Exchange._load_async_markets', MagicMock())
Exchange(default_conf) Exchange(default_conf)
assert log_has('Unable to initialize markets. Reason: SomeError', caplog.record_tuples) assert log_has('Unable to initialize markets. Reason: SomeError', caplog)
expected_return = {'ETH/BTC': 'available'} expected_return = {'ETH/BTC': 'available'}
api_mock = MagicMock() api_mock = MagicMock()
@ -298,7 +297,7 @@ def test__reload_markets(default_conf, mocker, caplog):
exchange._last_markets_refresh = arrow.utcnow().timestamp - 15 * 60 exchange._last_markets_refresh = arrow.utcnow().timestamp - 15 * 60
exchange._reload_markets() exchange._reload_markets()
assert exchange.markets == updated_markets assert exchange.markets == updated_markets
assert log_has('Performing scheduled market reload..', caplog.record_tuples) assert log_has('Performing scheduled market reload..', caplog)
def test__reload_markets_exception(default_conf, mocker, caplog): def test__reload_markets_exception(default_conf, mocker, caplog):
@ -312,7 +311,7 @@ def test__reload_markets_exception(default_conf, mocker, caplog):
# less than 10 minutes have passed, no reload # less than 10 minutes have passed, no reload
exchange._reload_markets() exchange._reload_markets()
assert exchange._last_markets_refresh == 0 assert exchange._last_markets_refresh == 0
assert log_has_re(r"Could not reload markets.*", caplog.record_tuples) assert log_has_re(r"Could not reload markets.*", caplog)
def test_validate_pairs(default_conf, mocker): # test exchange.validate_pairs directly def test_validate_pairs(default_conf, mocker): # test exchange.validate_pairs directly
@ -357,8 +356,7 @@ def test_validate_pairs_exception(default_conf, mocker, caplog):
mocker.patch('freqtrade.exchange.Exchange.markets', PropertyMock(return_value={})) mocker.patch('freqtrade.exchange.Exchange.markets', PropertyMock(return_value={}))
Exchange(default_conf) Exchange(default_conf)
assert log_has('Unable to validate pairs (assuming they are correct).', assert log_has('Unable to validate pairs (assuming they are correct).', caplog)
caplog.record_tuples)
def test_validate_pairs_restricted(default_conf, mocker, caplog): def test_validate_pairs_restricted(default_conf, mocker, caplog):
@ -374,8 +372,7 @@ def test_validate_pairs_restricted(default_conf, mocker, caplog):
Exchange(default_conf) Exchange(default_conf)
assert log_has(f"Pair XRP/BTC is restricted for some users on this exchange." assert log_has(f"Pair XRP/BTC is restricted for some users on this exchange."
f"Please check if you are impacted by this restriction " f"Please check if you are impacted by this restriction "
f"on the exchange and eventually remove XRP/BTC from your whitelist.", f"on the exchange and eventually remove XRP/BTC from your whitelist.", caplog)
caplog.record_tuples)
def test_validate_timeframes(default_conf, mocker): def test_validate_timeframes(default_conf, mocker):
@ -1003,7 +1000,7 @@ def test_get_ticker(default_conf, mocker, exchange_name):
@pytest.mark.parametrize("exchange_name", EXCHANGES) @pytest.mark.parametrize("exchange_name", EXCHANGES)
def test_get_history(default_conf, mocker, caplog, exchange_name): def test_get_historic_ohlcv(default_conf, mocker, caplog, exchange_name):
exchange = get_patched_exchange(mocker, default_conf, id=exchange_name) exchange = get_patched_exchange(mocker, default_conf, id=exchange_name)
tick = [ tick = [
[ [
@ -1024,7 +1021,7 @@ def test_get_history(default_conf, mocker, caplog, exchange_name):
# one_call calculation * 1.8 should do 2 calls # one_call calculation * 1.8 should do 2 calls
since = 5 * 60 * 500 * 1.8 since = 5 * 60 * 500 * 1.8
print(f"since = {since}") print(f"since = {since}")
ret = exchange.get_history(pair, "5m", int((arrow.utcnow().timestamp - since) * 1000)) ret = exchange.get_historic_ohlcv(pair, "5m", int((arrow.utcnow().timestamp - since) * 1000))
assert exchange._async_get_candle_history.call_count == 2 assert exchange._async_get_candle_history.call_count == 2
# Returns twice the above tick # Returns twice the above tick
@ -1060,7 +1057,7 @@ def test_refresh_latest_ohlcv(mocker, default_conf, caplog) -> None:
assert not exchange._klines assert not exchange._klines
exchange.refresh_latest_ohlcv(pairs) exchange.refresh_latest_ohlcv(pairs)
assert log_has(f'Refreshing ohlcv data for {len(pairs)} pairs', caplog.record_tuples) assert log_has(f'Refreshing ohlcv data for {len(pairs)} pairs', caplog)
assert exchange._klines assert exchange._klines
assert exchange._api_async.fetch_ohlcv.call_count == 2 assert exchange._api_async.fetch_ohlcv.call_count == 2
for pair in pairs: for pair in pairs:
@ -1079,7 +1076,7 @@ def test_refresh_latest_ohlcv(mocker, default_conf, caplog) -> None:
assert exchange._api_async.fetch_ohlcv.call_count == 2 assert exchange._api_async.fetch_ohlcv.call_count == 2
assert log_has(f"Using cached ohlcv data for pair {pairs[0][0]}, interval {pairs[0][1]} ...", assert log_has(f"Using cached ohlcv data for pair {pairs[0][0]}, interval {pairs[0][1]} ...",
caplog.record_tuples) caplog)
@pytest.mark.asyncio @pytest.mark.asyncio
@ -1109,7 +1106,7 @@ async def test__async_get_candle_history(default_conf, mocker, caplog, exchange_
assert res[1] == "5m" assert res[1] == "5m"
assert res[2] == tick assert res[2] == tick
assert exchange._api_async.fetch_ohlcv.call_count == 1 assert exchange._api_async.fetch_ohlcv.call_count == 1
assert not log_has(f"Using cached ohlcv data for {pair} ...", caplog.record_tuples) assert not log_has(f"Using cached ohlcv data for {pair} ...", caplog)
# exchange = Exchange(default_conf) # exchange = Exchange(default_conf)
await async_ccxt_exception(mocker, default_conf, MagicMock(), await async_ccxt_exception(mocker, default_conf, MagicMock(),
@ -1168,8 +1165,8 @@ def test_refresh_latest_ohlcv_inv_result(default_conf, mocker, caplog):
# Test that each is in list at least once as order is not guaranteed # Test that each is in list at least once as order is not guaranteed
assert type(res[0]) is tuple or type(res[1]) is tuple assert type(res[0]) is tuple or type(res[1]) is tuple
assert type(res[0]) is TypeError or type(res[1]) is TypeError assert type(res[0]) is TypeError or type(res[1]) is TypeError
assert log_has("Error loading ETH/BTC. Result was [[]].", caplog.record_tuples) assert log_has("Error loading ETH/BTC. Result was [[]].", caplog)
assert log_has("Async code raised an exception: TypeError", caplog.record_tuples) assert log_has("Async code raised an exception: TypeError", caplog)
@pytest.mark.parametrize("exchange_name", EXCHANGES) @pytest.mark.parametrize("exchange_name", EXCHANGES)
@ -1547,3 +1544,74 @@ def test_get_valid_pair_combination(default_conf, mocker, markets):
assert ex.get_valid_pair_combination("BTC", "ETH") == "ETH/BTC" assert ex.get_valid_pair_combination("BTC", "ETH") == "ETH/BTC"
with pytest.raises(DependencyException, match=r"Could not combine.* to get a valid pair."): with pytest.raises(DependencyException, match=r"Could not combine.* to get a valid pair."):
ex.get_valid_pair_combination("NOPAIR", "ETH") ex.get_valid_pair_combination("NOPAIR", "ETH")
def test_timeframe_to_minutes():
assert timeframe_to_minutes("5m") == 5
assert timeframe_to_minutes("10m") == 10
assert timeframe_to_minutes("1h") == 60
assert timeframe_to_minutes("1d") == 1440
def test_timeframe_to_seconds():
assert timeframe_to_seconds("5m") == 300
assert timeframe_to_seconds("10m") == 600
assert timeframe_to_seconds("1h") == 3600
assert timeframe_to_seconds("1d") == 86400
def test_timeframe_to_msecs():
assert timeframe_to_msecs("5m") == 300000
assert timeframe_to_msecs("10m") == 600000
assert timeframe_to_msecs("1h") == 3600000
assert timeframe_to_msecs("1d") == 86400000
def test_timeframe_to_prev_date():
# 2019-08-12 13:22:08
date = datetime.fromtimestamp(1565616128, tz=timezone.utc)
tf_list = [
# 5m -> 2019-08-12 13:20:00
("5m", datetime(2019, 8, 12, 13, 20, 0, tzinfo=timezone.utc)),
# 10m -> 2019-08-12 13:20:00
("10m", datetime(2019, 8, 12, 13, 20, 0, tzinfo=timezone.utc)),
# 1h -> 2019-08-12 13:00:00
("1h", datetime(2019, 8, 12, 13, 00, 0, tzinfo=timezone.utc)),
# 2h -> 2019-08-12 12:00:00
("2h", datetime(2019, 8, 12, 12, 00, 0, tzinfo=timezone.utc)),
# 4h -> 2019-08-12 12:00:00
("4h", datetime(2019, 8, 12, 12, 00, 0, tzinfo=timezone.utc)),
# 1d -> 2019-08-12 00:00:00
("1d", datetime(2019, 8, 12, 00, 00, 0, tzinfo=timezone.utc)),
]
for interval, result in tf_list:
assert timeframe_to_prev_date(interval, date) == result
date = datetime.now(tz=timezone.utc)
assert timeframe_to_prev_date("5m", date) < date
def test_timeframe_to_next_date():
# 2019-08-12 13:22:08
date = datetime.fromtimestamp(1565616128, tz=timezone.utc)
tf_list = [
# 5m -> 2019-08-12 13:25:00
("5m", datetime(2019, 8, 12, 13, 25, 0, tzinfo=timezone.utc)),
# 10m -> 2019-08-12 13:30:00
("10m", datetime(2019, 8, 12, 13, 30, 0, tzinfo=timezone.utc)),
# 1h -> 2019-08-12 14:00:00
("1h", datetime(2019, 8, 12, 14, 00, 0, tzinfo=timezone.utc)),
# 2h -> 2019-08-12 14:00:00
("2h", datetime(2019, 8, 12, 14, 00, 0, tzinfo=timezone.utc)),
# 4h -> 2019-08-12 16:00:00
("4h", datetime(2019, 8, 12, 16, 00, 0, tzinfo=timezone.utc)),
# 1d -> 2019-08-13 00:00:00
("1d", datetime(2019, 8, 13, 0, 0, 0, tzinfo=timezone.utc)),
]
for interval, result in tf_list:
assert timeframe_to_next_date(interval, date) == result
date = datetime.now(tz=timezone.utc)
assert timeframe_to_next_date("5m", date) > date
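
The new `timeframe_to_prev_date` / `timeframe_to_next_date` helpers round a timestamp down or up to the nearest candle boundary for a given timeframe; the tests above pin the expected values. A minimal floor/ceiling sketch over epoch seconds, assuming a simple suffix-based timeframe parser rather than whatever the exchange module actually uses internally:

```python
from datetime import datetime, timezone

_UNITS = {"m": 60, "h": 3600, "d": 86400}


def timeframe_to_seconds_sketch(timeframe: str) -> int:
    # e.g. "5m" -> 300, "1h" -> 3600, "1d" -> 86400
    return int(timeframe[:-1]) * _UNITS[timeframe[-1]]


def timeframe_to_prev_date_sketch(timeframe: str, date: datetime) -> datetime:
    secs = timeframe_to_seconds_sketch(timeframe)
    floored = int(date.timestamp()) // secs * secs
    return datetime.fromtimestamp(floored, tz=timezone.utc)


def timeframe_to_next_date_sketch(timeframe: str, date: datetime) -> datetime:
    secs = timeframe_to_seconds_sketch(timeframe)
    ceiled = (int(date.timestamp()) // secs + 1) * secs
    return datetime.fromtimestamp(ceiled, tz=timezone.utc)


date = datetime.fromtimestamp(1565616128, tz=timezone.utc)   # 2019-08-12 13:22:08
print(timeframe_to_prev_date_sketch("5m", date))   # 2019-08-12 13:20:00+00:00
print(timeframe_to_next_date_sketch("4h", date))   # 2019-08-12 16:00:00+00:00
```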

View File

@@ -181,21 +181,18 @@ def test_setup_configuration_without_arguments(mocker, default_conf, caplog) ->
    assert 'exchange' in config
    assert 'pair_whitelist' in config['exchange']
    assert 'datadir' in config
-    assert log_has(
-        'Using data directory: {} ...'.format(config['datadir']),
-        caplog.record_tuples
-    )
+    assert log_has('Using data directory: {} ...'.format(config['datadir']), caplog)
    assert 'ticker_interval' in config
-    assert not log_has_re('Parameter -i/--ticker-interval detected .*', caplog.record_tuples)
+    assert not log_has_re('Parameter -i/--ticker-interval detected .*', caplog)
    assert 'live' not in config
-    assert not log_has('Parameter -l/--live detected ...', caplog.record_tuples)
+    assert not log_has('Parameter -l/--live detected ...', caplog)
    assert 'position_stacking' not in config
-    assert not log_has('Parameter --enable-position-stacking detected ...', caplog.record_tuples)
+    assert not log_has('Parameter --enable-position-stacking detected ...', caplog)
    assert 'refresh_pairs' not in config
-    assert not log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog.record_tuples)
+    assert not log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog)
    assert 'timerange' not in config
    assert 'export' not in config
@@ -235,43 +232,31 @@ def test_setup_bt_configuration_with_arguments(mocker, default_conf, caplog) ->
    assert 'datadir' in config
    assert config['runmode'] == RunMode.BACKTEST
-    assert log_has(
-        'Using data directory: {} ...'.format(config['datadir']),
-        caplog.record_tuples
-    )
+    assert log_has('Using data directory: {} ...'.format(config['datadir']), caplog)
    assert 'ticker_interval' in config
    assert log_has('Parameter -i/--ticker-interval detected ... Using ticker_interval: 1m ...',
-                   caplog.record_tuples)
+                   caplog)
    assert 'live' in config
-    assert log_has('Parameter -l/--live detected ...', caplog.record_tuples)
+    assert log_has('Parameter -l/--live detected ...', caplog)
    assert 'position_stacking' in config
-    assert log_has('Parameter --enable-position-stacking detected ...', caplog.record_tuples)
+    assert log_has('Parameter --enable-position-stacking detected ...', caplog)
    assert 'use_max_market_positions' in config
-    assert log_has('Parameter --disable-max-market-positions detected ...', caplog.record_tuples)
-    assert log_has('max_open_trades set to unlimited ...', caplog.record_tuples)
+    assert log_has('Parameter --disable-max-market-positions detected ...', caplog)
+    assert log_has('max_open_trades set to unlimited ...', caplog)
    assert 'refresh_pairs' in config
-    assert log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog.record_tuples)
+    assert log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog)
    assert 'timerange' in config
-    assert log_has(
-        'Parameter --timerange detected: {} ...'.format(config['timerange']),
-        caplog.record_tuples
-    )
+    assert log_has('Parameter --timerange detected: {} ...'.format(config['timerange']), caplog)
    assert 'export' in config
-    assert log_has(
-        'Parameter --export detected: {} ...'.format(config['export']),
-        caplog.record_tuples
-    )
+    assert log_has('Parameter --export detected: {} ...'.format(config['export']), caplog)
    assert 'exportfilename' in config
-    assert log_has(
-        'Storing backtest results to {} ...'.format(config['exportfilename']),
-        caplog.record_tuples
-    )
+    assert log_has('Storing backtest results to {} ...'.format(config['exportfilename']), caplog)

def test_setup_configuration_unlimited_stake_amount(mocker, default_conf, caplog) -> None:
@@ -303,10 +288,7 @@ def test_start(mocker, fee, default_conf, caplog) -> None:
    ]
    args = get_args(args)
    start_backtesting(args)
-    assert log_has(
-        'Starting freqtrade in Backtesting mode',
-        caplog.record_tuples
-    )
+    assert log_has('Starting freqtrade in Backtesting mode', caplog)
    assert start_mock.call_count == 1
@@ -360,7 +342,7 @@ def test_backtesting_init_no_ticker_interval(mocker, default_conf, caplog) -> No
    with pytest.raises(OperationalException):
        Backtesting(default_conf)
    log_has("Ticker-interval needs to be set in either configuration "
-            "or as cli argument `--ticker-interval 5m`", caplog.record_tuples)
+            "or as cli argument `--ticker-interval 5m`", caplog)

def test_tickerdata_to_dataframe_bt(default_conf, mocker) -> None:
@@ -511,7 +493,7 @@ def test_backtesting_start(default_conf, mocker, caplog) -> None:
        'up to 2017-11-14T22:59:00+00:00 (0 days)..'
    ]
    for line in exists:
-        assert log_has(line, caplog.record_tuples)
+        assert log_has(line, caplog)

def test_backtesting_start_no_data(default_conf, mocker, caplog) -> None:
@@ -539,7 +521,7 @@ def test_backtesting_start_no_data(default_conf, mocker, caplog) -> None:
    backtesting.start()
    # check the logs, that will contain the backtest result
-    assert log_has('No data found. Terminating.', caplog.record_tuples)
+    assert log_has('No data found. Terminating.', caplog)

def test_backtest(default_conf, fee, mocker) -> None:
@@ -876,7 +858,7 @@ def test_backtest_start_live(default_conf, mocker, caplog):
    ]
    for line in exists:
-        assert log_has(line, caplog.record_tuples)
+        assert log_has(line, caplog)

@pytest.mark.filterwarnings("ignore:DEPRECATED")
@@ -936,4 +918,4 @@ def test_backtest_start_multi_strat(default_conf, mocker, caplog):
    ]
    for line in exists:
-        assert log_has(line, caplog.record_tuples)
+        assert log_has(line, caplog)
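Throughout these hunks the test helper call changes from `log_has(message, caplog.record_tuples)` to `log_has(message, caplog)`, i.e. the helper now receives pytest's `caplog` fixture directly and unpacks the records itself. As a sketch only -- the real helpers live in `freqtrade/tests/conftest.py`, which is not part of this excerpt -- the updated helpers presumably look roughly like this:

```python
import re


def log_has(line, logs):
    # `logs` is pytest's caplog fixture; record_tuples yields (logger, level, message).
    return any(line == message for _, _, message in logs.record_tuples)


def log_has_re(line, logs):
    # Same idea, but the expected line is treated as a regular expression.
    return any(re.search(line, message) for _, _, message in logs.record_tuples)
```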

View File

@@ -29,15 +29,12 @@ def test_setup_configuration_without_arguments(mocker, default_conf, caplog) ->
    assert 'exchange' in config
    assert 'pair_whitelist' in config['exchange']
    assert 'datadir' in config
-    assert log_has(
-        'Using data directory: {} ...'.format(config['datadir']),
-        caplog.record_tuples
-    )
+    assert log_has('Using data directory: {} ...'.format(config['datadir']), caplog)
    assert 'ticker_interval' in config
-    assert not log_has_re('Parameter -i/--ticker-interval detected .*', caplog.record_tuples)
+    assert not log_has_re('Parameter -i/--ticker-interval detected .*', caplog)
    assert 'refresh_pairs' not in config
-    assert not log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog.record_tuples)
+    assert not log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog)
    assert 'timerange' not in config
    assert 'stoploss_range' not in config
@@ -69,21 +66,15 @@ def test_setup_edge_configuration_with_arguments(mocker, edge_conf, caplog) -> N
    assert 'pair_whitelist' in config['exchange']
    assert 'datadir' in config
    assert config['runmode'] == RunMode.EDGE
-    assert log_has(
-        'Using data directory: {} ...'.format(config['datadir']),
-        caplog.record_tuples
-    )
+    assert log_has('Using data directory: {} ...'.format(config['datadir']), caplog)
    assert 'ticker_interval' in config
    assert log_has('Parameter -i/--ticker-interval detected ... Using ticker_interval: 1m ...',
-                   caplog.record_tuples)
+                   caplog)
    assert 'refresh_pairs' in config
-    assert log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog.record_tuples)
+    assert log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog)
    assert 'timerange' in config
-    assert log_has(
-        'Parameter --timerange detected: {} ...'.format(config['timerange']),
-        caplog.record_tuples
-    )
+    assert log_has('Parameter --timerange detected: {} ...'.format(config['timerange']), caplog)

def test_start(mocker, fee, edge_conf, caplog) -> None:
@@ -100,10 +91,7 @@ def test_start(mocker, fee, edge_conf, caplog) -> None:
    ]
    args = get_args(args)
    start_edge(args)
-    assert log_has(
-        'Starting freqtrade in Edge mode',
-        caplog.record_tuples
-    )
+    assert log_has('Starting freqtrade in Edge mode', caplog)
    assert start_mock.call_count == 1

View File

@@ -82,21 +82,18 @@ def test_setup_hyperopt_configuration_without_arguments(mocker, default_conf, ca
    assert 'exchange' in config
    assert 'pair_whitelist' in config['exchange']
    assert 'datadir' in config
-    assert log_has(
-        'Using data directory: {} ...'.format(config['datadir']),
-        caplog.record_tuples
-    )
+    assert log_has('Using data directory: {} ...'.format(config['datadir']), caplog)
    assert 'ticker_interval' in config
-    assert not log_has_re('Parameter -i/--ticker-interval detected .*', caplog.record_tuples)
+    assert not log_has_re('Parameter -i/--ticker-interval detected .*', caplog)
    assert 'live' not in config
-    assert not log_has('Parameter -l/--live detected ...', caplog.record_tuples)
+    assert not log_has('Parameter -l/--live detected ...', caplog)
    assert 'position_stacking' not in config
-    assert not log_has('Parameter --enable-position-stacking detected ...', caplog.record_tuples)
+    assert not log_has('Parameter --enable-position-stacking detected ...', caplog)
    assert 'refresh_pairs' not in config
-    assert not log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog.record_tuples)
+    assert not log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog)
    assert 'timerange' not in config
    assert 'runmode' in config
@@ -133,41 +130,32 @@ def test_setup_hyperopt_configuration_with_arguments(mocker, default_conf, caplo
    assert 'datadir' in config
    assert config['runmode'] == RunMode.HYPEROPT
-    assert log_has(
-        'Using data directory: {} ...'.format(config['datadir']),
-        caplog.record_tuples
-    )
+    assert log_has('Using data directory: {} ...'.format(config['datadir']), caplog)
    assert 'ticker_interval' in config
    assert log_has('Parameter -i/--ticker-interval detected ... Using ticker_interval: 1m ...',
-                   caplog.record_tuples)
+                   caplog)
    assert 'position_stacking' in config
-    assert log_has('Parameter --enable-position-stacking detected ...', caplog.record_tuples)
+    assert log_has('Parameter --enable-position-stacking detected ...', caplog)
    assert 'use_max_market_positions' in config
-    assert log_has('Parameter --disable-max-market-positions detected ...', caplog.record_tuples)
-    assert log_has('max_open_trades set to unlimited ...', caplog.record_tuples)
+    assert log_has('Parameter --disable-max-market-positions detected ...', caplog)
+    assert log_has('max_open_trades set to unlimited ...', caplog)
    assert 'refresh_pairs' in config
-    assert log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog.record_tuples)
+    assert log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog)
    assert 'timerange' in config
-    assert log_has(
-        'Parameter --timerange detected: {} ...'.format(config['timerange']),
-        caplog.record_tuples
-    )
+    assert log_has('Parameter --timerange detected: {} ...'.format(config['timerange']), caplog)
    assert 'epochs' in config
    assert log_has('Parameter --epochs detected ... Will run Hyperopt with for 1000 epochs ...',
-                   caplog.record_tuples)
+                   caplog)
    assert 'spaces' in config
-    assert log_has(
-        'Parameter -s/--spaces detected: {}'.format(config['spaces']),
-        caplog.record_tuples
-    )
+    assert log_has('Parameter -s/--spaces detected: {}'.format(config['spaces']), caplog)
    assert 'print_all' in config
-    assert log_has('Parameter --print-all detected ...', caplog.record_tuples)
+    assert log_has('Parameter --print-all detected ...', caplog)

def test_hyperoptresolver(mocker, default_conf, caplog) -> None:
@@ -184,9 +172,9 @@ def test_hyperoptresolver(mocker, default_conf, caplog) -> None:
    assert not hasattr(x, 'populate_buy_trend')
    assert not hasattr(x, 'populate_sell_trend')
    assert log_has("Custom Hyperopt does not provide populate_sell_trend. "
-                   "Using populate_sell_trend from DefaultStrategy.", caplog.record_tuples)
+                   "Using populate_sell_trend from DefaultStrategy.", caplog)
    assert log_has("Custom Hyperopt does not provide populate_buy_trend. "
-                   "Using populate_buy_trend from DefaultStrategy.", caplog.record_tuples)
+                   "Using populate_buy_trend from DefaultStrategy.", caplog)
    assert hasattr(x, "ticker_interval")
@@ -232,10 +220,7 @@ def test_start(mocker, default_conf, caplog) -> None:
    import pprint
    pprint.pprint(caplog.record_tuples)
-    assert log_has(
-        'Starting freqtrade in Hyperopt mode',
-        caplog.record_tuples
-    )
+    assert log_has('Starting freqtrade in Hyperopt mode', caplog)
    assert start_mock.call_count == 1
@@ -260,7 +245,7 @@ def test_start_no_data(mocker, default_conf, caplog) -> None:
    import pprint
    pprint.pprint(caplog.record_tuples)
-    assert log_has('No data found. Terminating.', caplog.record_tuples)
+    assert log_has('No data found. Terminating.', caplog)

def test_start_failure(mocker, default_conf, caplog) -> None:
@@ -278,10 +263,7 @@ def test_start_failure(mocker, default_conf, caplog) -> None:
    args = get_args(args)
    with pytest.raises(DependencyException):
        start_hyperopt(args)
-    assert log_has(
-        "Please don't use --strategy for hyperopt.",
-        caplog.record_tuples
-    )
+    assert log_has("Please don't use --strategy for hyperopt.", caplog)

def test_start_filelock(mocker, default_conf, caplog) -> None:
@@ -297,10 +279,7 @@ def test_start_filelock(mocker, default_conf, caplog) -> None:
    ]
    args = get_args(args)
    start_hyperopt(args)
-    assert log_has(
-        "Another running instance of freqtrade Hyperopt detected.",
-        caplog.record_tuples
-    )
+    assert log_has("Another running instance of freqtrade Hyperopt detected.", caplog)

def test_loss_calculation_prefer_correct_trade_count(default_conf, hyperopt_results) -> None:
@@ -404,10 +383,7 @@ def test_save_trials_saves_trials(mocker, hyperopt, caplog) -> None:
    hyperopt.save_trials()
    trials_file = os.path.join('freqtrade', 'tests', 'optimize', 'ut_trials.pickle')
-    assert log_has(
-        'Saving 1 evaluations to \'{}\''.format(trials_file),
-        caplog.record_tuples
-    )
+    assert log_has('Saving 1 evaluations to \'{}\''.format(trials_file), caplog)
    mock_dump.assert_called_once()
@@ -416,10 +392,7 @@ def test_read_trials_returns_trials_file(mocker, hyperopt, caplog) -> None:
    mock_load = mocker.patch('freqtrade.optimize.hyperopt.load', return_value=trials)
    hyperopt_trial = hyperopt.read_trials()
    trials_file = os.path.join('freqtrade', 'tests', 'optimize', 'ut_trials.pickle')
-    assert log_has(
-        'Reading Trials from \'{}\''.format(trials_file),
-        caplog.record_tuples
-    )
+    assert log_has('Reading Trials from \'{}\''.format(trials_file), caplog)
    assert hyperopt_trial == trials
    mock_load.assert_called_once()
@@ -608,7 +581,8 @@ def test_generate_optimizer(mocker, default_conf) -> None:
        'loss': 1.9840569076926293,
        'results_explanation': ' 1 trades. Avg profit 2.31%. Total profit 0.00023300 BTC '
                               '( 2.31Σ%). Avg duration 100.0 mins.',
-        'params': optimizer_param
+        'params': optimizer_param,
+        'total_profit': 0.00023300
    }
    hyperopt = Hyperopt(default_conf)
@@ -629,7 +603,7 @@ def test_clean_hyperopt(mocker, default_conf, caplog):
    h = Hyperopt(default_conf)
    assert unlinkmock.call_count == 2
-    assert log_has(f"Removing `{h.tickerdata_pickle}`.", caplog.record_tuples)
+    assert log_has(f"Removing `{h.tickerdata_pickle}`.", caplog)

def test_continue_hyperopt(mocker, default_conf, caplog):
@@ -646,4 +620,78 @@ def test_continue_hyperopt(mocker, default_conf, caplog):
    Hyperopt(default_conf)
    assert unlinkmock.call_count == 0
-    assert log_has(f"Continuing on previous hyperopt results.", caplog.record_tuples)
+    assert log_has(f"Continuing on previous hyperopt results.", caplog)
+
+
+def test_print_json_spaces_all(mocker, default_conf, caplog, capsys) -> None:
+    dumper = mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
+    mocker.patch('freqtrade.optimize.hyperopt.load_data', MagicMock())
+    mocker.patch(
+        'freqtrade.optimize.hyperopt.get_timeframe',
+        MagicMock(return_value=(datetime(2017, 12, 10), datetime(2017, 12, 13)))
+    )
+    parallel = mocker.patch(
+        'freqtrade.optimize.hyperopt.Hyperopt.run_optimizer_parallel',
+        MagicMock(return_value=[{'loss': 1, 'results_explanation': 'foo result', 'params': {}}])
+    )
+    patch_exchange(mocker)
+    default_conf.update({'config': 'config.json.example',
+                         'epochs': 1,
+                         'timerange': None,
+                         'spaces': 'all',
+                         'hyperopt_jobs': 1,
+                         'print_json': True,
+                         })
+    hyperopt = Hyperopt(default_conf)
+    hyperopt.strategy.tickerdata_to_dataframe = MagicMock()
+    hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
+    hyperopt.start()
+    parallel.assert_called_once()
+    out, err = capsys.readouterr()
+    assert '{"params":{"mfi-value":null,"fastd-value":null,"adx-value":null,"rsi-value":null,"mfi-enabled":null,"fastd-enabled":null,"adx-enabled":null,"rsi-enabled":null,"trigger":null,"sell-mfi-value":null,"sell-fastd-value":null,"sell-adx-value":null,"sell-rsi-value":null,"sell-mfi-enabled":null,"sell-fastd-enabled":null,"sell-adx-enabled":null,"sell-rsi-enabled":null,"sell-trigger":null},"minimal_roi":{},"stoploss":null}' in out  # noqa: E501
+    assert dumper.called
+    # Should be called twice, once for tickerdata, once to save evaluations
+    assert dumper.call_count == 2
+
+
+def test_print_json_spaces_roi_stoploss(mocker, default_conf, caplog, capsys) -> None:
+    dumper = mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
+    mocker.patch('freqtrade.optimize.hyperopt.load_data', MagicMock())
+    mocker.patch(
+        'freqtrade.optimize.hyperopt.get_timeframe',
+        MagicMock(return_value=(datetime(2017, 12, 10), datetime(2017, 12, 13)))
+    )
+    parallel = mocker.patch(
+        'freqtrade.optimize.hyperopt.Hyperopt.run_optimizer_parallel',
+        MagicMock(return_value=[{'loss': 1, 'results_explanation': 'foo result', 'params': {}}])
+    )
+    patch_exchange(mocker)
+    default_conf.update({'config': 'config.json.example',
+                         'epochs': 1,
+                         'timerange': None,
+                         'spaces': 'roi stoploss',
+                         'hyperopt_jobs': 1,
+                         'print_json': True,
+                         })
+    hyperopt = Hyperopt(default_conf)
+    hyperopt.strategy.tickerdata_to_dataframe = MagicMock()
+    hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
+    hyperopt.start()
+    parallel.assert_called_once()
+    out, err = capsys.readouterr()
+    assert '{"minimal_roi":{},"stoploss":null}' in out
+    assert dumper.called
+    # Should be called twice, once for tickerdata, once to save evaluations
+    assert dumper.call_count == 2
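The strings asserted on stdout above are simply the compact JSON rendering of the hyperopt result: no whitespace after separators, and Python's `None` serialized as `null`. Shown here only to make the expected output easier to read (this does not claim anything about which JSON library hyperopt itself uses), the same formatting can be reproduced with the standard library:

```python
import json

result = {'minimal_roi': {}, 'stoploss': None}
print(json.dumps(result, separators=(',', ':')))
# -> {"minimal_roi":{},"stoploss":null}
```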

View File

@@ -91,7 +91,7 @@ def test_fiat_convert_unsupported_crypto(mocker, caplog):
    mocker.patch('freqtrade.rpc.fiat_convert.CryptoToFiatConverter._cryptomap', return_value=[])
    fiat_convert = CryptoToFiatConverter()
    assert fiat_convert._find_price(crypto_symbol='CRYPTO_123', fiat_symbol='EUR') == 0.0
-    assert log_has('unsupported crypto-symbol CRYPTO_123 - returning 0.0', caplog.record_tuples)
+    assert log_has('unsupported crypto-symbol CRYPTO_123 - returning 0.0', caplog)

def test_fiat_convert_get_price(mocker):
@@ -190,7 +190,7 @@ def test_fiat_invalid_response(mocker, caplog):
    length_cryptomap = len(fiat_convert._cryptomap)
    assert length_cryptomap == 0
    assert log_has('Could not load FIAT Cryptocurrency map for the following problem: TypeError',
-                   caplog.record_tuples)
+                   caplog)

def test_convert_amount(mocker):

View File

@@ -44,7 +44,7 @@ def test_rpc_trade_status(default_conf, ticker, fee, markets, mocker) -> None:
    with pytest.raises(RPCException, match=r'.*no active trade*'):
        rpc._rpc_trade_status()
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    results = rpc._rpc_trade_status()
    assert {
        'trade_id': 1,
@@ -116,7 +116,7 @@ def test_rpc_status_table(default_conf, ticker, fee, markets, mocker) -> None:
    with pytest.raises(RPCException, match=r'.*no active order*'):
        rpc._rpc_status_table()
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    result = rpc._rpc_status_table()
    assert 'instantly' in result['Since'].all()
    assert 'ETH/BTC' in result['Pair'].all()
@@ -151,7 +151,7 @@ def test_rpc_daily_profit(default_conf, update, ticker, fee,
    rpc = RPC(freqtradebot)
    rpc._fiat_converter = CryptoToFiatConverter()
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trade = Trade.query.first()
    assert trade
@@ -208,7 +208,7 @@ def test_rpc_trade_statistics(default_conf, ticker, ticker_sell_up, fee,
        rpc._rpc_trade_statistics(stake_currency, fiat_display_currency)
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trade = Trade.query.first()
    # Simulate fulfilled LIMIT_BUY order for trade
    trade.update(limit_buy_order)
@@ -222,7 +222,7 @@ def test_rpc_trade_statistics(default_conf, ticker, ticker_sell_up, fee,
    trade.close_date = datetime.utcnow()
    trade.is_open = False
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trade = Trade.query.first()
    # Simulate fulfilled LIMIT_BUY order for trade
    trade.update(limit_buy_order)
@@ -292,7 +292,7 @@ def test_rpc_trade_statistics_closed(mocker, default_conf, ticker, fee, markets,
    rpc = RPC(freqtradebot)
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trade = Trade.query.first()
    # Simulate fulfilled LIMIT_BUY order for trade
    trade.update(limit_buy_order)
@@ -536,7 +536,7 @@ def test_rpc_forcesell(default_conf, ticker, fee, mocker, markets) -> None:
    msg = rpc._rpc_forcesell('all')
    assert msg == {'result': 'Created sell orders for all open trades.'}
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    msg = rpc._rpc_forcesell('all')
    assert msg == {'result': 'Created sell orders for all open trades.'}
@@ -570,7 +570,7 @@ def test_rpc_forcesell(default_conf, ticker, fee, mocker, markets) -> None:
    assert cancel_order_mock.call_count == 1
    assert trade.amount == filled_amount
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trade = Trade.query.filter(Trade.id == '2').first()
    amount = trade.amount
    # make an limit-buy open trade, if there is no 'filled', don't sell it
@@ -589,7 +589,7 @@ def test_rpc_forcesell(default_conf, ticker, fee, mocker, markets) -> None:
    assert cancel_order_mock.call_count == 2
    assert trade.amount == amount
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    # make an limit-sell open trade
    mocker.patch(
        'freqtrade.exchange.Exchange.get_order',
@@ -622,7 +622,7 @@ def test_performance_handle(default_conf, ticker, limit_buy_order, fee,
    rpc = RPC(freqtradebot)
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trade = Trade.query.first()
    assert trade
@@ -660,7 +660,7 @@ def test_rpc_count(mocker, default_conf, ticker, fee, markets) -> None:
    assert counts["current"] == 0
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    counts = rpc._rpc_count()
    assert counts["current"] == 1
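These hunks, together with the telegram-test hunks further down (where `default_conf['max_open_trades'] = 4` plus a single call replaces a loop of four `create_trade()` calls), track a rename on the bot side: `create_trade()` becomes `create_trades()`, which apparently fills all free trade slots in one pass instead of opening a single position per call. Purely as an illustration of that presumed behaviour -- a standalone toy, not the FreqtradeBot method or its signature:

```python
def create_trades(open_count: int, max_open_trades: int, whitelist: list) -> list:
    """Toy model: return the pairs one call would open positions for."""
    opened = []
    for pair in whitelist:
        if open_count + len(opened) >= max_open_trades:
            break
        opened.append(pair)
    return opened


# With max_open_trades=4 and no open trades, a single call fills all four slots:
assert create_trades(0, 4, ['ETH/BTC', 'XRP/BTC', 'LTC/BTC', 'ADA/BTC', 'TRX/BTC']) == \
    ['ETH/BTC', 'XRP/BTC', 'LTC/BTC', 'ADA/BTC']
```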

View File

@@ -148,8 +148,8 @@ def test_api_run(default_conf, mocker, caplog):
    assert isinstance(server_mock.call_args_list[0][0][2], Flask)
    assert hasattr(apiserver, "srv")
-    assert log_has("Starting HTTP Server at 127.0.0.1:8080", caplog.record_tuples)
-    assert log_has("Starting Local Rest Server.", caplog.record_tuples)
+    assert log_has("Starting HTTP Server at 127.0.0.1:8080", caplog)
+    assert log_has("Starting Local Rest Server.", caplog)
    # Test binding to public
    caplog.clear()
@@ -165,22 +165,20 @@ def test_api_run(default_conf, mocker, caplog):
    assert server_mock.call_args_list[0][0][0] == "0.0.0.0"
    assert server_mock.call_args_list[0][0][1] == "8089"
    assert isinstance(server_mock.call_args_list[0][0][2], Flask)
-    assert log_has("Starting HTTP Server at 0.0.0.0:8089", caplog.record_tuples)
-    assert log_has("Starting Local Rest Server.", caplog.record_tuples)
+    assert log_has("Starting HTTP Server at 0.0.0.0:8089", caplog)
+    assert log_has("Starting Local Rest Server.", caplog)
    assert log_has("SECURITY WARNING - Local Rest Server listening to external connections",
-                   caplog.record_tuples)
+                   caplog)
    assert log_has("SECURITY WARNING - This is insecure please set to your loopback,"
-                   "e.g 127.0.0.1 in config.json",
-                   caplog.record_tuples)
+                   "e.g 127.0.0.1 in config.json", caplog)
    assert log_has("SECURITY WARNING - No password for local REST Server defined. "
-                   "Please make sure that this is intentional!",
-                   caplog.record_tuples)
+                   "Please make sure that this is intentional!", caplog)
    # Test crashing flask
    caplog.clear()
    mocker.patch('freqtrade.rpc.api_server.make_server', MagicMock(side_effect=Exception))
    apiserver.run()
-    assert log_has("Api server failed to start.", caplog.record_tuples)
+    assert log_has("Api server failed to start.", caplog)

def test_api_cleanup(default_conf, mocker, caplog):
@@ -199,7 +197,7 @@ def test_api_cleanup(default_conf, mocker, caplog):
    apiserver.cleanup()
    assert stop_mock.shutdown.call_count == 1
-    assert log_has("Stopping API Server", caplog.record_tuples)
+    assert log_has("Stopping API Server", caplog)

def test_api_reloadconf(botclient):
@@ -277,7 +275,7 @@ def test_api_count(botclient, mocker, ticker, fee, markets):
    assert rc.json["max"] == 1.0
    # Create some test data
-    ftbot.create_trade()
+    ftbot.create_trades()
    rc = client_get(client, f"{BASE_URI}/count")
    assert_response(rc)
    assert rc.json["current"] == 1.0
@@ -331,7 +329,7 @@ def test_api_profit(botclient, mocker, ticker, fee, markets, limit_buy_order, li
    assert len(rc.json) == 1
    assert rc.json == {"error": "Error querying _profit: no closed trade"}
-    ftbot.create_trade()
+    ftbot.create_trades()
    trade = Trade.query.first()
    # Simulate fulfilled LIMIT_BUY order for trade
@@ -420,7 +418,7 @@ def test_api_status(botclient, mocker, ticker, fee, markets):
    assert_response(rc, 502)
    assert rc.json == {'error': 'Error querying _status: no active trade'}
-    ftbot.create_trade()
+    ftbot.create_trades()
    rc = client_get(client, f"{BASE_URI}/status")
    assert_response(rc)
    assert len(rc.json) == 1
@@ -550,7 +548,7 @@ def test_api_forcesell(botclient, mocker, ticker, fee, markets):
    assert_response(rc, 502)
    assert rc.json == {"error": "Error querying _forcesell: invalid argument"}
-    ftbot.create_trade()
+    ftbot.create_trades()
    rc = client_post(client, f"{BASE_URI}/forcesell",
                     data='{"tradeid": "1"}')

View File

@@ -19,7 +19,7 @@ def test_init_telegram_disabled(mocker, default_conf, caplog) -> None:
    default_conf['telegram']['enabled'] = False
    rpc_manager = RPCManager(get_patched_freqtradebot(mocker, default_conf))
-    assert not log_has('Enabling rpc.telegram ...', caplog.record_tuples)
+    assert not log_has('Enabling rpc.telegram ...', caplog)
    assert rpc_manager.registered_modules == []
@@ -28,7 +28,7 @@ def test_init_telegram_enabled(mocker, default_conf, caplog) -> None:
    mocker.patch('freqtrade.rpc.telegram.Telegram._init', MagicMock())
    rpc_manager = RPCManager(get_patched_freqtradebot(mocker, default_conf))
-    assert log_has('Enabling rpc.telegram ...', caplog.record_tuples)
+    assert log_has('Enabling rpc.telegram ...', caplog)
    len_modules = len(rpc_manager.registered_modules)
    assert len_modules == 1
    assert 'telegram' in [mod.name for mod in rpc_manager.registered_modules]
@@ -43,7 +43,7 @@ def test_cleanup_telegram_disabled(mocker, default_conf, caplog) -> None:
    rpc_manager = RPCManager(freqtradebot)
    rpc_manager.cleanup()
-    assert not log_has('Cleaning up rpc.telegram ...', caplog.record_tuples)
+    assert not log_has('Cleaning up rpc.telegram ...', caplog)
    assert telegram_mock.call_count == 0
@@ -59,7 +59,7 @@ def test_cleanup_telegram_enabled(mocker, default_conf, caplog) -> None:
    assert 'telegram' in [mod.name for mod in rpc_manager.registered_modules]
    rpc_manager.cleanup()
-    assert log_has('Cleaning up rpc.telegram ...', caplog.record_tuples)
+    assert log_has('Cleaning up rpc.telegram ...', caplog)
    assert 'telegram' not in [mod.name for mod in rpc_manager.registered_modules]
    assert telegram_mock.call_count == 1
@@ -75,7 +75,7 @@ def test_send_msg_telegram_disabled(mocker, default_conf, caplog) -> None:
        'status': 'test'
    })
-    assert log_has("Sending rpc message: {'type': status, 'status': 'test'}", caplog.record_tuples)
+    assert log_has("Sending rpc message: {'type': status, 'status': 'test'}", caplog)
    assert telegram_mock.call_count == 0
@@ -90,7 +90,7 @@ def test_send_msg_telegram_enabled(mocker, default_conf, caplog) -> None:
        'status': 'test'
    })
-    assert log_has("Sending rpc message: {'type': status, 'status': 'test'}", caplog.record_tuples)
+    assert log_has("Sending rpc message: {'type': status, 'status': 'test'}", caplog)
    assert telegram_mock.call_count == 1
@@ -100,7 +100,7 @@ def test_init_webhook_disabled(mocker, default_conf, caplog) -> None:
    default_conf['webhook'] = {'enabled': False}
    rpc_manager = RPCManager(get_patched_freqtradebot(mocker, default_conf))
-    assert not log_has('Enabling rpc.webhook ...', caplog.record_tuples)
+    assert not log_has('Enabling rpc.webhook ...', caplog)
    assert rpc_manager.registered_modules == []
@@ -110,7 +110,7 @@ def test_init_webhook_enabled(mocker, default_conf, caplog) -> None:
    default_conf['webhook'] = {'enabled': True, 'url': "https://DEADBEEF.com"}
    rpc_manager = RPCManager(get_patched_freqtradebot(mocker, default_conf))
-    assert log_has('Enabling rpc.webhook ...', caplog.record_tuples)
+    assert log_has('Enabling rpc.webhook ...', caplog)
    assert len(rpc_manager.registered_modules) == 1
    assert 'webhook' in [mod.name for mod in rpc_manager.registered_modules]
@@ -144,7 +144,7 @@ def test_init_apiserver_disabled(mocker, default_conf, caplog) -> None:
    default_conf['telegram']['enabled'] = False
    rpc_manager = RPCManager(get_patched_freqtradebot(mocker, default_conf))
-    assert not log_has('Enabling rpc.api_server', caplog.record_tuples)
+    assert not log_has('Enabling rpc.api_server', caplog)
    assert rpc_manager.registered_modules == []
    assert run_mock.call_count == 0
@@ -160,7 +160,7 @@ def test_init_apiserver_enabled(mocker, default_conf, caplog) -> None:
                                  "listen_port": "8080"}
    rpc_manager = RPCManager(get_patched_freqtradebot(mocker, default_conf))
-    assert log_has('Enabling rpc.api_server', caplog.record_tuples)
+    assert log_has('Enabling rpc.api_server', caplog)
    assert len(rpc_manager.registered_modules) == 1
    assert 'apiserver' in [mod.name for mod in rpc_manager.registered_modules]
    assert run_mock.call_count == 1

View File

@@ -76,7 +76,7 @@ def test_init(default_conf, mocker, caplog) -> None:
                  "['performance'], ['daily'], ['count'], ['reload_conf'], " \
                  "['stopbuy'], ['whitelist'], ['blacklist'], ['edge'], ['help'], ['version']]"
-    assert log_has(message_str, caplog.record_tuples)
+    assert log_has(message_str, caplog)

def test_cleanup(default_conf, mocker) -> None:
@@ -102,18 +102,9 @@ def test_authorized_only(default_conf, mocker, caplog) -> None:
    dummy = DummyCls(bot)
    dummy.dummy_handler(bot=MagicMock(), update=update)
    assert dummy.state['called'] is True
-    assert log_has(
-        'Executing handler: dummy_handler for chat_id: 0',
-        caplog.record_tuples
-    )
-    assert not log_has(
-        'Rejected unauthorized message from: 0',
-        caplog.record_tuples
-    )
-    assert not log_has(
-        'Exception occurred within Telegram module',
-        caplog.record_tuples
-    )
+    assert log_has('Executing handler: dummy_handler for chat_id: 0', caplog)
+    assert not log_has('Rejected unauthorized message from: 0', caplog)
+    assert not log_has('Exception occurred within Telegram module', caplog)

def test_authorized_only_unauthorized(default_conf, mocker, caplog) -> None:
@@ -128,18 +119,9 @@ def test_authorized_only_unauthorized(default_conf, mocker, caplog) -> None:
    dummy = DummyCls(bot)
    dummy.dummy_handler(bot=MagicMock(), update=update)
    assert dummy.state['called'] is False
-    assert not log_has(
-        'Executing handler: dummy_handler for chat_id: 3735928559',
-        caplog.record_tuples
-    )
-    assert log_has(
-        'Rejected unauthorized message from: 3735928559',
-        caplog.record_tuples
-    )
-    assert not log_has(
-        'Exception occurred within Telegram module',
-        caplog.record_tuples
-    )
+    assert not log_has('Executing handler: dummy_handler for chat_id: 3735928559', caplog)
+    assert log_has('Rejected unauthorized message from: 3735928559', caplog)
+    assert not log_has('Exception occurred within Telegram module', caplog)

def test_authorized_only_exception(default_conf, mocker, caplog) -> None:
@@ -156,18 +138,9 @@ def test_authorized_only_exception(default_conf, mocker, caplog) -> None:
    dummy.dummy_exception(bot=MagicMock(), update=update)
    assert dummy.state['called'] is False
-    assert not log_has(
-        'Executing handler: dummy_handler for chat_id: 0',
-        caplog.record_tuples
-    )
-    assert not log_has(
-        'Rejected unauthorized message from: 0',
-        caplog.record_tuples
-    )
-    assert log_has(
-        'Exception occurred within Telegram module',
-        caplog.record_tuples
-    )
+    assert not log_has('Executing handler: dummy_handler for chat_id: 0', caplog)
+    assert not log_has('Rejected unauthorized message from: 0', caplog)
+    assert log_has('Exception occurred within Telegram module', caplog)

def test_status(default_conf, update, mocker, fee, ticker, markets) -> None:
@@ -219,7 +192,7 @@ def test_status(default_conf, update, mocker, fee, ticker, markets) -> None:
    # Create some test data
    for _ in range(3):
-        freqtradebot.create_trade()
+        freqtradebot.create_trades()
    telegram._status(bot=MagicMock(), update=update)
    assert msg_mock.call_count == 1
@@ -267,7 +240,7 @@ def test_status_handle(default_conf, update, ticker, fee, markets, mocker) -> No
    msg_mock.reset_mock()
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    # Trigger status while we have a fulfilled order for the open trade
    telegram._status(bot=MagicMock(), update=update)
@@ -319,7 +292,7 @@ def test_status_table_handle(default_conf, update, ticker, fee, markets, mocker)
    msg_mock.reset_mock()
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    telegram._status_table(bot=MagicMock(), update=update)
@@ -335,6 +308,7 @@ def test_status_table_handle(default_conf, update, ticker, fee, markets, mocker)
def test_daily_handle(default_conf, update, ticker, limit_buy_order, fee,
                      limit_sell_order, markets, mocker) -> None:
    patch_exchange(mocker)
+    default_conf['max_open_trades'] = 1
    mocker.patch(
        'freqtrade.rpc.rpc.CryptoToFiatConverter._find_price',
        return_value=15000.0
@@ -358,7 +332,7 @@ def test_daily_handle(default_conf, update, ticker, limit_buy_order, fee,
    telegram = Telegram(freqtradebot)
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trade = Trade.query.first()
    assert trade
@@ -384,9 +358,9 @@ def test_daily_handle(default_conf, update, ticker, limit_buy_order, fee,
    # Reset msg_mock
    msg_mock.reset_mock()
+    freqtradebot.config['max_open_trades'] = 2
    # Add two other trades
-    freqtradebot.create_trade()
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trades = Trade.query.all()
    for trade in trades:
@@ -465,7 +439,7 @@ def test_profit_handle(default_conf, update, ticker, ticker_sell_up, fee,
    msg_mock.reset_mock()
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trade = Trade.query.first()
    # Simulate fulfilled LIMIT_BUY order for trade
@@ -760,7 +734,7 @@ def test_forcesell_handle(default_conf, update, ticker, fee,
    telegram = Telegram(freqtradebot)
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trade = Trade.query.first()
    assert trade
@@ -811,7 +785,7 @@ def test_forcesell_down_handle(default_conf, update, ticker, fee,
    telegram = Telegram(freqtradebot)
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    # Decrease the price and sell it
    mocker.patch.multiple(
@@ -859,14 +833,13 @@ def test_forcesell_all_handle(default_conf, update, ticker, fee, markets, mocker
        markets=PropertyMock(return_value=markets),
        validate_pairs=MagicMock(return_value={})
    )
+    default_conf['max_open_trades'] = 4
    freqtradebot = FreqtradeBot(default_conf)
    patch_get_signal(freqtradebot, (True, False))
    telegram = Telegram(freqtradebot)
    # Create some test data
-    for _ in range(4):
-        freqtradebot.create_trade()
+    freqtradebot.create_trades()
    rpc_mock.reset_mock()
    update.message.text = '/forcesell all'
@@ -1010,7 +983,7 @@ def test_performance_handle(default_conf, update, ticker, fee,
    telegram = Telegram(freqtradebot)
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    trade = Trade.query.first()
    assert trade
@@ -1055,7 +1028,7 @@ def test_count_handle(default_conf, update, ticker, fee, markets, mocker) -> Non
    freqtradebot.state = State.RUNNING
    # Create some test data
-    freqtradebot.create_trade()
+    freqtradebot.create_trades()
    msg_mock.reset_mock()
    telegram._count(bot=MagicMock(), update=update)
@@ -1440,7 +1413,4 @@ def test__send_msg_network_error(default_conf, mocker, caplog) -> None:
    # Bot should've tried to send it twice
    assert len(bot.method_calls) == 2
-    assert log_has(
-        'Telegram NetworkError: Oh snap! Trying one more time.',
-        caplog.record_tuples
-    )
+    assert log_has('Telegram NetworkError: Oh snap! Trying one more time.', caplog)

View File

@@ -115,7 +115,7 @@ def test_exception_send_msg(default_conf, mocker, caplog):
    webhook = Webhook(get_patched_freqtradebot(mocker, default_conf))
    webhook.send_msg({'type': RPCMessageType.BUY_NOTIFICATION})
    assert log_has(f"Message type {RPCMessageType.BUY_NOTIFICATION} not configured for webhooks",
-                   caplog.record_tuples)
+                   caplog)
    default_conf["webhook"] = get_webhook_dict()
    default_conf["webhook"]["webhookbuy"]["value1"] = "{DEADBEEF:8f}"
@@ -135,7 +135,7 @@ def test_exception_send_msg(default_conf, mocker, caplog):
    }
    webhook.send_msg(msg)
    assert log_has("Problem calling Webhook. Please check your webhook configuration. "
-                   "Exception: 'DEADBEEF'", caplog.record_tuples)
+                   "Exception: 'DEADBEEF'", caplog)
    msg_mock = MagicMock()
    mocker.patch("freqtrade.rpc.webhook.Webhook._send_msg", msg_mock)
@@ -164,4 +164,4 @@ def test__send_msg(default_conf, mocker, caplog):
    post = MagicMock(side_effect=RequestException)
    mocker.patch("freqtrade.rpc.webhook.post", post)
    webhook._send_msg(msg)
-    assert log_has('Could not call webhook url. Exception: ', caplog.record_tuples)
+    assert log_has('Could not call webhook url. Exception: ', caplog)

View File

@@ -49,12 +49,12 @@ def test_returns_latest_sell_signal(mocker, default_conf, ticker_history):

def test_get_signal_empty(default_conf, mocker, caplog):
    assert (False, False) == _STRATEGY.get_signal('foo', default_conf['ticker_interval'],
                                                  DataFrame())
-    assert log_has('Empty ticker history for pair foo', caplog.record_tuples)
+    assert log_has('Empty ticker history for pair foo', caplog)
    caplog.clear()
    assert (False, False) == _STRATEGY.get_signal('bar', default_conf['ticker_interval'],
                                                  [])
-    assert log_has('Empty ticker history for pair bar', caplog.record_tuples)
+    assert log_has('Empty ticker history for pair bar', caplog)

def test_get_signal_exception_valueerror(default_conf, mocker, caplog, ticker_history):
@@ -65,7 +65,7 @@ def test_get_signal_exception_valueerror(default_conf, mocker, caplog, ticker_hi
    )
    assert (False, False) == _STRATEGY.get_signal('foo', default_conf['ticker_interval'],
                                                  ticker_history)
-    assert log_has('Unable to analyze ticker for pair foo: xyz', caplog.record_tuples)
+    assert log_has('Unable to analyze ticker for pair foo: xyz', caplog)

def test_get_signal_empty_dataframe(default_conf, mocker, caplog, ticker_history):
@@ -76,7 +76,7 @@ def test_get_signal_empty_dataframe(default_conf, mocker, caplog, ticker_history
    )
    assert (False, False) == _STRATEGY.get_signal('xyz', default_conf['ticker_interval'],
                                                  ticker_history)
-    assert log_has('Empty dataframe for pair xyz', caplog.record_tuples)
+    assert log_has('Empty dataframe for pair xyz', caplog)

def test_get_signal_old_dataframe(default_conf, mocker, caplog, ticker_history):
@@ -91,10 +91,7 @@ def test_get_signal_old_dataframe(default_conf, mocker, caplog, ticker_history):
    )
    assert (False, False) == _STRATEGY.get_signal('xyz', default_conf['ticker_interval'],
                                                  ticker_history)
-    assert log_has(
-        'Outdated history for pair xyz. Last tick is 16 minutes old',
-        caplog.record_tuples
-    )
+    assert log_has('Outdated history for pair xyz. Last tick is 16 minutes old', caplog)

def test_get_signal_handles_exceptions(mocker, default_conf):
@@ -237,9 +234,8 @@ def test_analyze_ticker_default(ticker_history, mocker, caplog) -> None:
    assert buy_mock.call_count == 1
    assert buy_mock.call_count == 1
-    assert log_has('TA Analysis Launched', caplog.record_tuples)
-    assert not log_has('Skipping TA Analysis for already analyzed candle',
-                       caplog.record_tuples)
+    assert log_has('TA Analysis Launched', caplog)
+    assert not log_has('Skipping TA Analysis for already analyzed candle', caplog)
    caplog.clear()
    strategy.analyze_ticker(ticker_history, {'pair': 'ETH/BTC'})
@@ -247,9 +243,8 @@ def test_analyze_ticker_default(ticker_history, mocker, caplog) -> None:
    assert ind_mock.call_count == 2
    assert buy_mock.call_count == 2
    assert buy_mock.call_count == 2
-    assert log_has('TA Analysis Launched', caplog.record_tuples)
-    assert not log_has('Skipping TA Analysis for already analyzed candle',
-                       caplog.record_tuples)
+    assert log_has('TA Analysis Launched', caplog)
+    assert not log_has('Skipping TA Analysis for already analyzed candle', caplog)

def test__analyze_ticker_internal_skip_analyze(ticker_history, mocker, caplog) -> None:
@@ -275,9 +270,8 @@ def test__analyze_ticker_internal_skip_analyze(ticker_history, mocker, caplog) -
    assert ind_mock.call_count == 1
    assert buy_mock.call_count == 1
    assert buy_mock.call_count == 1
-    assert log_has('TA Analysis Launched', caplog.record_tuples)
-    assert not log_has('Skipping TA Analysis for already analyzed candle',
-                       caplog.record_tuples)
+    assert log_has('TA Analysis Launched', caplog)
+    assert not log_has('Skipping TA Analysis for already analyzed candle', caplog)
    caplog.clear()
    ret = strategy._analyze_ticker_internal(ticker_history, {'pair': 'ETH/BTC'})
@@ -290,6 +284,21 @@ def test__analyze_ticker_internal_skip_analyze(ticker_history, mocker, caplog) -
    assert 'sell' in ret.columns
    assert ret['buy'].sum() == 0
    assert ret['sell'].sum() == 0
-    assert not log_has('TA Analysis Launched', caplog.record_tuples)
-    assert log_has('Skipping TA Analysis for already analyzed candle',
-                   caplog.record_tuples)
+    assert not log_has('TA Analysis Launched', caplog)
+    assert log_has('Skipping TA Analysis for already analyzed candle', caplog)
+
+
+def test_is_pair_locked(default_conf):
+    strategy = DefaultStrategy(default_conf)
+    # dict should be empty
+    assert not strategy._pair_locked_until
+
+    pair = 'ETH/BTC'
+    assert not strategy.is_pair_locked(pair)
+    strategy.lock_pair(pair, arrow.utcnow().shift(minutes=4).datetime)
+    # ETH/BTC locked for 4 minutes
+    assert strategy.is_pair_locked(pair)
+
+    # XRP/BTC should not be locked now
+    pair = 'XRP/BTC'
+    assert not strategy.is_pair_locked(pair)
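The new `test_is_pair_locked` exercises a pair-locking facility on the strategy (`lock_pair()` / `is_pair_locked()` backed by a `_pair_locked_until` dict). The following is a minimal standalone sketch of that behaviour, written under the assumption that the lock simply stores an expiry timestamp per pair -- the real code lives in `IStrategy` and is not shown in this diff:

```python
from datetime import datetime, timedelta, timezone


class PairLockSketch:
    """Illustration only: expiry-per-pair locking as suggested by the test above."""

    def __init__(self):
        self._pair_locked_until = {}

    def lock_pair(self, pair, until):
        # Remember until when this pair should not be traded.
        self._pair_locked_until[pair] = until

    def is_pair_locked(self, pair):
        # Locked only if an expiry exists and lies in the future.
        until = self._pair_locked_until.get(pair)
        return until is not None and until > datetime.now(timezone.utc)


locker = PairLockSketch()
locker.lock_pair('ETH/BTC', datetime.now(timezone.utc) + timedelta(minutes=4))
assert locker.is_pair_locked('ETH/BTC')
assert not locker.is_pair_locked('XRP/BTC')
```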

View File

@@ -15,7 +15,7 @@ from freqtrade.resolvers import StrategyResolver
from freqtrade.strategy import import_strategy
from freqtrade.strategy.default_strategy import DefaultStrategy
from freqtrade.strategy.interface import IStrategy
-from freqtrade.tests.conftest import log_has_re
+from freqtrade.tests.conftest import log_has, log_has_re

def test_import_strategy(caplog):
@@ -35,12 +35,8 @@ def test_import_strategy(caplog):
    assert imported_strategy.__module__ == 'freqtrade.strategy'
    assert imported_strategy.some_method() == 42
-    assert (
-        'freqtrade.strategy',
-        logging.DEBUG,
-        'Imported strategy freqtrade.strategy.default_strategy.DefaultStrategy '
-        'as freqtrade.strategy.DefaultStrategy',
-    ) in caplog.record_tuples
+    assert log_has('Imported strategy freqtrade.strategy.default_strategy.DefaultStrategy '
+                   'as freqtrade.strategy.DefaultStrategy', caplog)

def test_search_strategy():
@@ -79,8 +75,7 @@ def test_load_strategy_base64(result, caplog, default_conf):
    assert 'adx' in resolver.strategy.advise_indicators(result, {'pair': 'ETH/BTC'})
    # Make sure strategy was loaded from base64 (using temp directory)!!
    assert log_has_re(r"Using resolved strategy TestStrategy from '"
-                      + tempfile.gettempdir() + r"/.*/TestStrategy\.py'\.\.\.",
-                      caplog.record_tuples)
+                      + tempfile.gettempdir() + r"/.*/TestStrategy\.py'\.\.\.", caplog)

def test_load_strategy_invalid_directory(result, caplog, default_conf):
@@ -88,7 +83,7 @@ def test_load_strategy_invalid_directory(result, caplog, default_conf):
    extra_dir = Path.cwd() / 'some/path'
    resolver._load_strategy('TestStrategy', config=default_conf, extra_dir=extra_dir)
-    assert log_has_re(r'Path .*' + r'some.*path.*' + r'.* does not exist', caplog.record_tuples)
+    assert log_has_re(r'Path .*' + r'some.*path.*' + r'.* does not exist', caplog)
    assert 'adx' in resolver.strategy.advise_indicators(result, {'pair': 'ETH/BTC'})
@@ -108,7 +103,7 @@ def test_load_staticmethod_importerror(mocker, caplog, default_conf):
                       match=r"Impossible to load Strategy 'DefaultStrategy'. "
                             r"This class does not exist or contains Python code errors."):
        StrategyResolver(default_conf)
-    assert log_has_re(r".*Error: can't pickle staticmethod objects", caplog.record_tuples)
+    assert log_has_re(r".*Error: can't pickle staticmethod objects", caplog)

def test_strategy(result, default_conf):
@@ -146,10 +141,7 @@ def test_strategy_override_minimal_roi(caplog, default_conf):
    resolver = StrategyResolver(default_conf)
    assert resolver.strategy.minimal_roi[0] == 0.5
-    assert ('freqtrade.resolvers.strategy_resolver',
-            logging.INFO,
-            "Override strategy 'minimal_roi' with value in config file: {'0': 0.5}."
-            ) in caplog.record_tuples
+    assert log_has("Override strategy 'minimal_roi' with value in config file: {'0': 0.5}.", caplog)

def test_strategy_override_stoploss(caplog, default_conf):
@@ -161,10 +153,7 @@ def test_strategy_override_stoploss(caplog, default_conf):
    resolver = StrategyResolver(default_conf)
    assert resolver.strategy.stoploss == -0.5
-    assert ('freqtrade.resolvers.strategy_resolver',
-            logging.INFO,
-            "Override strategy 'stoploss' with value in config file: -0.5."
-            ) in caplog.record_tuples
+    assert log_has("Override strategy 'stoploss' with value in config file: -0.5.", caplog)

def test_strategy_override_trailing_stop(caplog, default_conf):
@@ -177,10 +166,7 @@ def test_strategy_override_trailing_stop(caplog, default_conf):
    assert resolver.strategy.trailing_stop
    assert isinstance(resolver.strategy.trailing_stop, bool)
-    assert ('freqtrade.resolvers.strategy_resolver',
-            logging.INFO,
-            "Override strategy 'trailing_stop' with value in config file: True."
-            ) in caplog.record_tuples
+    assert log_has("Override strategy 'trailing_stop' with value in config file: True.", caplog)

def test_strategy_override_trailing_stop_positive(caplog, default_conf):
@@ -194,16 +180,12 @@ def test_strategy_override_trailing_stop_positive(caplog, default_conf):
    resolver = StrategyResolver(default_conf)
    assert resolver.strategy.trailing_stop_positive == -0.1
-    assert ('freqtrade.resolvers.strategy_resolver',
+    assert log_has("Override strategy 'trailing_stop_positive' with value in config file: -0.1.",
logging.INFO, caplog)
"Override strategy 'trailing_stop_positive' with value in config file: -0.1."
) in caplog.record_tuples
assert resolver.strategy.trailing_stop_positive_offset == -0.2 assert resolver.strategy.trailing_stop_positive_offset == -0.2
assert ('freqtrade.resolvers.strategy_resolver', assert log_has("Override strategy 'trailing_stop_positive' with value in config file: -0.1.",
logging.INFO, caplog)
"Override strategy 'trailing_stop_positive' with value in config file: -0.1."
) in caplog.record_tuples
def test_strategy_override_ticker_interval(caplog, default_conf): def test_strategy_override_ticker_interval(caplog, default_conf):
@ -218,10 +200,8 @@ def test_strategy_override_ticker_interval(caplog, default_conf):
assert resolver.strategy.ticker_interval == 60 assert resolver.strategy.ticker_interval == 60
assert resolver.strategy.stake_currency == 'ETH' assert resolver.strategy.stake_currency == 'ETH'
assert ('freqtrade.resolvers.strategy_resolver', assert log_has("Override strategy 'ticker_interval' with value in config file: 60.",
logging.INFO, caplog)
"Override strategy 'ticker_interval' with value in config file: 60."
) in caplog.record_tuples
def test_strategy_override_process_only_new_candles(caplog, default_conf): def test_strategy_override_process_only_new_candles(caplog, default_conf):
@ -234,10 +214,8 @@ def test_strategy_override_process_only_new_candles(caplog, default_conf):
resolver = StrategyResolver(default_conf) resolver = StrategyResolver(default_conf)
assert resolver.strategy.process_only_new_candles assert resolver.strategy.process_only_new_candles
assert ('freqtrade.resolvers.strategy_resolver', assert log_has("Override strategy 'process_only_new_candles' with value in config file: True.",
logging.INFO, caplog)
"Override strategy 'process_only_new_candles' with value in config file: True."
) in caplog.record_tuples
def test_strategy_override_order_types(caplog, default_conf): def test_strategy_override_order_types(caplog, default_conf):
@ -259,12 +237,9 @@ def test_strategy_override_order_types(caplog, default_conf):
for method in ['buy', 'sell', 'stoploss', 'stoploss_on_exchange']: for method in ['buy', 'sell', 'stoploss', 'stoploss_on_exchange']:
assert resolver.strategy.order_types[method] == order_types[method] assert resolver.strategy.order_types[method] == order_types[method]
assert ('freqtrade.resolvers.strategy_resolver', assert log_has("Override strategy 'order_types' with value in config file:"
logging.INFO, " {'buy': 'market', 'sell': 'limit', 'stoploss': 'limit',"
"Override strategy 'order_types' with value in config file:" " 'stoploss_on_exchange': True}.", caplog)
" {'buy': 'market', 'sell': 'limit', 'stoploss': 'limit',"
" 'stoploss_on_exchange': True}."
) in caplog.record_tuples
default_conf.update({ default_conf.update({
'strategy': 'DefaultStrategy', 'strategy': 'DefaultStrategy',
@ -295,11 +270,8 @@ def test_strategy_override_order_tif(caplog, default_conf):
for method in ['buy', 'sell']: for method in ['buy', 'sell']:
assert resolver.strategy.order_time_in_force[method] == order_time_in_force[method] assert resolver.strategy.order_time_in_force[method] == order_time_in_force[method]
assert ('freqtrade.resolvers.strategy_resolver', assert log_has("Override strategy 'order_time_in_force' with value in config file:"
logging.INFO, " {'buy': 'fok', 'sell': 'gtc'}.", caplog)
"Override strategy 'order_time_in_force' with value in config file:"
" {'buy': 'fok', 'sell': 'gtc'}."
) in caplog.record_tuples
default_conf.update({ default_conf.update({
'strategy': 'DefaultStrategy', 'strategy': 'DefaultStrategy',
@ -334,10 +306,7 @@ def test_strategy_override_use_sell_signal(caplog, default_conf):
assert resolver.strategy.use_sell_signal assert resolver.strategy.use_sell_signal
assert isinstance(resolver.strategy.use_sell_signal, bool) assert isinstance(resolver.strategy.use_sell_signal, bool)
assert ('freqtrade.resolvers.strategy_resolver', assert log_has("Override strategy 'use_sell_signal' with value in config file: True.", caplog)
logging.INFO,
"Override strategy 'use_sell_signal' with value in config file: True."
) in caplog.record_tuples
def test_strategy_override_use_sell_profit_only(caplog, default_conf): def test_strategy_override_use_sell_profit_only(caplog, default_conf):
@ -362,10 +331,7 @@ def test_strategy_override_use_sell_profit_only(caplog, default_conf):
assert resolver.strategy.sell_profit_only assert resolver.strategy.sell_profit_only
assert isinstance(resolver.strategy.sell_profit_only, bool) assert isinstance(resolver.strategy.sell_profit_only, bool)
assert ('freqtrade.resolvers.strategy_resolver', assert log_has("Override strategy 'sell_profit_only' with value in config file: True.", caplog)
logging.INFO,
"Override strategy 'sell_profit_only' with value in config file: True."
) in caplog.record_tuples
@pytest.mark.filterwarnings("ignore:deprecated") @pytest.mark.filterwarnings("ignore:deprecated")
View File
@ -3,7 +3,7 @@ import argparse
import pytest import pytest
from freqtrade.configuration import Arguments, TimeRange from freqtrade.configuration import Arguments
from freqtrade.configuration.arguments import ARGS_DOWNLOADER, ARGS_PLOT_DATAFRAME from freqtrade.configuration.arguments import ARGS_DOWNLOADER, ARGS_PLOT_DATAFRAME
from freqtrade.configuration.cli_options import check_int_positive from freqtrade.configuration.cli_options import check_int_positive
@ -86,30 +86,6 @@ def test_parse_args_strategy_path_invalid() -> None:
Arguments(['--strategy-path'], '').get_parsed_arg() Arguments(['--strategy-path'], '').get_parsed_arg()
def test_parse_timerange_incorrect() -> None:
assert TimeRange(None, 'line', 0, -200) == Arguments.parse_timerange('-200')
assert TimeRange('line', None, 200, 0) == Arguments.parse_timerange('200-')
assert TimeRange('index', 'index', 200, 500) == Arguments.parse_timerange('200-500')
assert TimeRange('date', None, 1274486400, 0) == Arguments.parse_timerange('20100522-')
assert TimeRange(None, 'date', 0, 1274486400) == Arguments.parse_timerange('-20100522')
timerange = Arguments.parse_timerange('20100522-20150730')
assert timerange == TimeRange('date', 'date', 1274486400, 1438214400)
# Added test for unix timestamp - BTC genesis date
assert TimeRange('date', None, 1231006505, 0) == Arguments.parse_timerange('1231006505-')
assert TimeRange(None, 'date', 0, 1233360000) == Arguments.parse_timerange('-1233360000')
timerange = Arguments.parse_timerange('1231006505-1233360000')
assert TimeRange('date', 'date', 1231006505, 1233360000) == timerange
# TODO: Find solution for the following case (passing timestamp in ms)
timerange = Arguments.parse_timerange('1231006505000-1233360000000')
assert TimeRange('date', 'date', 1231006505, 1233360000) != timerange
with pytest.raises(Exception, match=r'Incorrect syntax.*'):
Arguments.parse_timerange('-')
def test_parse_args_backtesting_invalid() -> None: def test_parse_args_backtesting_invalid() -> None:
with pytest.raises(SystemExit, match=r'2'): with pytest.raises(SystemExit, match=r'2'):
Arguments(['backtesting --ticker-interval'], '').get_parsed_arg() Arguments(['backtesting --ticker-interval'], '').get_parsed_arg()
View File
@ -73,7 +73,7 @@ def test__args_to_config(caplog):
# No warnings ... # No warnings ...
configuration._args_to_config(config, argname="strategy_path", logstring="DeadBeef") configuration._args_to_config(config, argname="strategy_path", logstring="DeadBeef")
assert len(w) == 0 assert len(w) == 0
assert log_has("DeadBeef", caplog.record_tuples) assert log_has("DeadBeef", caplog)
assert config['strategy_path'] == "TestTest" assert config['strategy_path'] == "TestTest"
configuration = Configuration(args) configuration = Configuration(args)
@ -85,7 +85,7 @@ def test__args_to_config(caplog):
assert len(w) == 1 assert len(w) == 1
assert issubclass(w[-1].category, DeprecationWarning) assert issubclass(w[-1].category, DeprecationWarning)
assert "DEPRECATED: Going away soon!" in str(w[-1].message) assert "DEPRECATED: Going away soon!" in str(w[-1].message)
assert log_has("DeadBeef", caplog.record_tuples) assert log_has("DeadBeef", caplog)
assert config['strategy_path'] == "TestTest" assert config['strategy_path'] == "TestTest"
@ -99,7 +99,7 @@ def test_load_config_max_open_trades_zero(default_conf, mocker, caplog) -> None:
assert validated_conf['max_open_trades'] == 0 assert validated_conf['max_open_trades'] == 0
assert 'internals' in validated_conf assert 'internals' in validated_conf
assert log_has('Validating configuration ...', caplog.record_tuples) assert log_has('Validating configuration ...', caplog)
def test_load_config_combine_dicts(default_conf, mocker, caplog) -> None: def test_load_config_combine_dicts(default_conf, mocker, caplog) -> None:
@ -131,7 +131,36 @@ def test_load_config_combine_dicts(default_conf, mocker, caplog) -> None:
assert validated_conf['exchange']['pair_whitelist'] == conf2['exchange']['pair_whitelist'] assert validated_conf['exchange']['pair_whitelist'] == conf2['exchange']['pair_whitelist']
assert 'internals' in validated_conf assert 'internals' in validated_conf
assert log_has('Validating configuration ...', caplog.record_tuples) assert log_has('Validating configuration ...', caplog)
def test_from_config(default_conf, mocker, caplog) -> None:
conf1 = deepcopy(default_conf)
conf2 = deepcopy(default_conf)
del conf1['exchange']['key']
del conf1['exchange']['secret']
del conf2['exchange']['name']
conf2['exchange']['pair_whitelist'] += ['NANO/BTC']
conf2['fiat_display_currency'] = "EUR"
config_files = [conf1, conf2]
configsmock = MagicMock(side_effect=config_files)
mocker.patch(
'freqtrade.configuration.configuration.load_config_file',
configsmock
)
validated_conf = Configuration.from_files(['test_conf.json', 'test2_conf.json'])
exchange_conf = default_conf['exchange']
assert validated_conf['exchange']['name'] == exchange_conf['name']
assert validated_conf['exchange']['key'] == exchange_conf['key']
assert validated_conf['exchange']['secret'] == exchange_conf['secret']
assert validated_conf['exchange']['pair_whitelist'] != conf1['exchange']['pair_whitelist']
assert validated_conf['exchange']['pair_whitelist'] == conf2['exchange']['pair_whitelist']
assert validated_conf['fiat_display_currency'] == "EUR"
assert 'internals' in validated_conf
assert log_has('Validating configuration ...', caplog)
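`test_from_config` above exercises the new `Configuration.from_files()` classmethod: each file is read through `load_config_file` and the results are merged so that later files override earlier ones, while nested sections such as `exchange` are merged rather than replaced. A rough sketch of those merge semantics, not the actual freqtrade implementation:

# Rough sketch of the behaviour the test asserts; Configuration.from_files() may differ in detail.
from freqtrade.configuration.configuration import load_config_file  # same function the test patches

def deep_merge(base: dict, overlay: dict) -> dict:
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)    # merge nested sections recursively
        else:
            base[key] = value               # later files win for plain values
    return base

def from_files(files) -> dict:
    config: dict = {}
    for path in files:
        config = deep_merge(config, load_config_file(path))
    return config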
def test_load_config_max_open_trades_minus_one(default_conf, mocker, caplog) -> None: def test_load_config_max_open_trades_minus_one(default_conf, mocker, caplog) -> None:
@ -144,7 +173,7 @@ def test_load_config_max_open_trades_minus_one(default_conf, mocker, caplog) ->
assert validated_conf['max_open_trades'] > 999999999 assert validated_conf['max_open_trades'] > 999999999
assert validated_conf['max_open_trades'] == float('inf') assert validated_conf['max_open_trades'] == float('inf')
assert log_has('Validating configuration ...', caplog.record_tuples) assert log_has('Validating configuration ...', caplog)
assert "runmode" in validated_conf assert "runmode" in validated_conf
assert validated_conf['runmode'] == RunMode.DRY_RUN assert validated_conf['runmode'] == RunMode.DRY_RUN
@ -281,8 +310,8 @@ def test_show_info(default_conf, mocker, caplog) -> None:
configuration = Configuration(args) configuration = Configuration(args)
configuration.get_config() configuration.get_config()
assert log_has('Using DB: "sqlite:///tmp/testdb"', caplog.record_tuples) assert log_has('Using DB: "sqlite:///tmp/testdb"', caplog)
assert log_has('Dry run is enabled', caplog.record_tuples) assert log_has('Dry run is enabled', caplog)
def test_setup_configuration_without_arguments(mocker, default_conf, caplog) -> None: def test_setup_configuration_without_arguments(mocker, default_conf, caplog) -> None:
@ -305,21 +334,18 @@ def test_setup_configuration_without_arguments(mocker, default_conf, caplog) ->
assert 'pair_whitelist' in config['exchange'] assert 'pair_whitelist' in config['exchange']
assert 'datadir' in config assert 'datadir' in config
assert 'user_data_dir' in config assert 'user_data_dir' in config
assert log_has( assert log_has('Using data directory: {} ...'.format(config['datadir']), caplog)
'Using data directory: {} ...'.format(config['datadir']),
caplog.record_tuples
)
assert 'ticker_interval' in config assert 'ticker_interval' in config
assert not log_has('Parameter -i/--ticker-interval detected ...', caplog.record_tuples) assert not log_has('Parameter -i/--ticker-interval detected ...', caplog)
assert 'live' not in config assert 'live' not in config
assert not log_has('Parameter -l/--live detected ...', caplog.record_tuples) assert not log_has('Parameter -l/--live detected ...', caplog)
assert 'position_stacking' not in config assert 'position_stacking' not in config
assert not log_has('Parameter --enable-position-stacking detected ...', caplog.record_tuples) assert not log_has('Parameter --enable-position-stacking detected ...', caplog)
assert 'refresh_pairs' not in config assert 'refresh_pairs' not in config
assert not log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog.record_tuples) assert not log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog)
assert 'timerange' not in config assert 'timerange' not in config
assert 'export' not in config assert 'export' not in config
@ -361,43 +387,31 @@ def test_setup_configuration_with_arguments(mocker, default_conf, caplog) -> Non
assert 'exchange' in config assert 'exchange' in config
assert 'pair_whitelist' in config['exchange'] assert 'pair_whitelist' in config['exchange']
assert 'datadir' in config assert 'datadir' in config
assert log_has( assert log_has('Using data directory: {} ...'.format("/foo/bar"), caplog)
'Using data directory: {} ...'.format("/foo/bar"), assert log_has('Using user-data directory: {} ...'.format("/tmp/freqtrade"), caplog)
caplog.record_tuples
)
assert log_has(
'Using user-data directory: {} ...'.format("/tmp/freqtrade"),
caplog.record_tuples
)
assert 'user_data_dir' in config assert 'user_data_dir' in config
assert 'ticker_interval' in config assert 'ticker_interval' in config
assert log_has('Parameter -i/--ticker-interval detected ... Using ticker_interval: 1m ...', assert log_has('Parameter -i/--ticker-interval detected ... Using ticker_interval: 1m ...',
caplog.record_tuples) caplog)
assert 'live' in config assert 'live' in config
assert log_has('Parameter -l/--live detected ...', caplog.record_tuples) assert log_has('Parameter -l/--live detected ...', caplog)
assert 'position_stacking' in config assert 'position_stacking' in config
assert log_has('Parameter --enable-position-stacking detected ...', caplog.record_tuples) assert log_has('Parameter --enable-position-stacking detected ...', caplog)
assert 'use_max_market_positions' in config assert 'use_max_market_positions' in config
assert log_has('Parameter --disable-max-market-positions detected ...', caplog.record_tuples) assert log_has('Parameter --disable-max-market-positions detected ...', caplog)
assert log_has('max_open_trades set to unlimited ...', caplog.record_tuples) assert log_has('max_open_trades set to unlimited ...', caplog)
assert 'refresh_pairs' in config assert 'refresh_pairs' in config
assert log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog.record_tuples) assert log_has('Parameter -r/--refresh-pairs-cached detected ...', caplog)
assert 'timerange' in config assert 'timerange' in config
assert log_has( assert log_has('Parameter --timerange detected: {} ...'.format(config['timerange']), caplog)
'Parameter --timerange detected: {} ...'.format(config['timerange']),
caplog.record_tuples
)
assert 'export' in config assert 'export' in config
assert log_has( assert log_has('Parameter --export detected: {} ...'.format(config['export']), caplog)
'Parameter --export detected: {} ...'.format(config['export']),
caplog.record_tuples
)
def test_setup_configuration_with_stratlist(mocker, default_conf, caplog) -> None: def test_setup_configuration_with_stratlist(mocker, default_conf, caplog) -> None:
@ -427,16 +441,13 @@ def test_setup_configuration_with_stratlist(mocker, default_conf, caplog) -> Non
assert 'exchange' in config assert 'exchange' in config
assert 'pair_whitelist' in config['exchange'] assert 'pair_whitelist' in config['exchange']
assert 'datadir' in config assert 'datadir' in config
assert log_has( assert log_has('Using data directory: {} ...'.format(config['datadir']), caplog)
'Using data directory: {} ...'.format(config['datadir']),
caplog.record_tuples
)
assert 'ticker_interval' in config assert 'ticker_interval' in config
assert log_has('Parameter -i/--ticker-interval detected ... Using ticker_interval: 1m ...', assert log_has('Parameter -i/--ticker-interval detected ... Using ticker_interval: 1m ...',
caplog.record_tuples) caplog)
assert 'strategy_list' in config assert 'strategy_list' in config
assert log_has('Using strategy list of 2 Strategies', caplog.record_tuples) assert log_has('Using strategy list of 2 Strategies', caplog)
assert 'position_stacking' not in config assert 'position_stacking' not in config
@ -445,10 +456,7 @@ def test_setup_configuration_with_stratlist(mocker, default_conf, caplog) -> Non
assert 'timerange' not in config assert 'timerange' not in config
assert 'export' in config assert 'export' in config
assert log_has( assert log_has('Parameter --export detected: {} ...'.format(config['export']), caplog)
'Parameter --export detected: {} ...'.format(config['export']),
caplog.record_tuples
)
def test_hyperopt_with_arguments(mocker, default_conf, caplog) -> None: def test_hyperopt_with_arguments(mocker, default_conf, caplog) -> None:
@ -467,11 +475,11 @@ def test_hyperopt_with_arguments(mocker, default_conf, caplog) -> None:
assert 'epochs' in config assert 'epochs' in config
assert int(config['epochs']) == 10 assert int(config['epochs']) == 10
assert log_has('Parameter --epochs detected ... Will run Hyperopt with for 10 epochs ...', assert log_has('Parameter --epochs detected ... Will run Hyperopt with for 10 epochs ...',
caplog.record_tuples) caplog)
assert 'spaces' in config assert 'spaces' in config
assert config['spaces'] == ['all'] assert config['spaces'] == ['all']
assert log_has('Parameter -s/--spaces detected: [\'all\']', caplog.record_tuples) assert log_has('Parameter -s/--spaces detected: [\'all\']', caplog)
assert "runmode" in config assert "runmode" in config
assert config['runmode'] == RunMode.HYPEROPT assert config['runmode'] == RunMode.HYPEROPT
@ -481,38 +489,35 @@ def test_check_exchange(default_conf, caplog) -> None:
default_conf.get('exchange').update({'name': 'BITTREX'}) default_conf.get('exchange').update({'name': 'BITTREX'})
assert check_exchange(default_conf) assert check_exchange(default_conf)
assert log_has_re(r"Exchange .* is officially supported by the Freqtrade development team\.", assert log_has_re(r"Exchange .* is officially supported by the Freqtrade development team\.",
caplog.record_tuples) caplog)
caplog.clear() caplog.clear()
# Test an exchange officially supported by the Freqtrade team # Test an exchange officially supported by the Freqtrade team
default_conf.get('exchange').update({'name': 'binance'}) default_conf.get('exchange').update({'name': 'binance'})
assert check_exchange(default_conf) assert check_exchange(default_conf)
assert log_has_re(r"Exchange .* is officially supported by the Freqtrade development team\.", assert log_has_re(r"Exchange .* is officially supported by the Freqtrade development team\.",
caplog.record_tuples) caplog)
caplog.clear() caplog.clear()
# Test an available exchange, supported by ccxt # Test an available exchange, supported by ccxt
default_conf.get('exchange').update({'name': 'kraken'}) default_conf.get('exchange').update({'name': 'kraken'})
assert check_exchange(default_conf) assert check_exchange(default_conf)
assert log_has_re(r"Exchange .* is supported by ccxt and .* not officially supported " assert log_has_re(r"Exchange .* is supported by ccxt and .* not officially supported "
r"by the Freqtrade development team\. .*", r"by the Freqtrade development team\. .*", caplog)
caplog.record_tuples)
caplog.clear() caplog.clear()
# Test a 'bad' exchange, which is known to have serious problems # Test a 'bad' exchange, which is known to have serious problems
default_conf.get('exchange').update({'name': 'bitmex'}) default_conf.get('exchange').update({'name': 'bitmex'})
assert not check_exchange(default_conf) with pytest.raises(OperationalException,
assert log_has_re(r"Exchange .* is known to not work with the bot yet\. " match=r"Exchange .* is known to not work with the bot yet.*"):
r"Use it only for development and testing purposes\.", check_exchange(default_conf)
caplog.record_tuples)
caplog.clear() caplog.clear()
# Test a 'bad' exchange with check_for_bad=False # Test a 'bad' exchange with check_for_bad=False
default_conf.get('exchange').update({'name': 'bitmex'}) default_conf.get('exchange').update({'name': 'bitmex'})
assert check_exchange(default_conf, False) assert check_exchange(default_conf, False)
assert log_has_re(r"Exchange .* is supported by ccxt and .* not officially supported " assert log_has_re(r"Exchange .* is supported by ccxt and .* not officially supported "
r"by the Freqtrade development team\. .*", r"by the Freqtrade development team\. .*", caplog)
caplog.record_tuples)
caplog.clear() caplog.clear()
# Test an invalid exchange # Test an invalid exchange
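Note the behavioural change asserted above: for exchanges known to be broken, `check_exchange()` now raises `OperationalException` instead of returning `False`, unless it is called with `check_for_bad=False`. Callers that used to branch on the boolean would have to catch the exception instead; the snippet below only illustrates the new contract, and the import locations are assumptions:

# Illustration of the new contract; import paths are assumptions, not taken from this diff.
from freqtrade import OperationalException
from freqtrade.configuration import check_exchange

def exchange_is_usable(config: dict) -> bool:
    try:
        return check_exchange(config)       # now raises for known-bad exchanges
    except OperationalException as exc:
        print(f"Refusing to start: {exc}")
        return False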
@ -538,7 +543,7 @@ def test_cli_verbose_with_params(default_conf, mocker, caplog) -> None:
validated_conf = configuration.load_config() validated_conf = configuration.load_config()
assert validated_conf.get('verbosity') == 3 assert validated_conf.get('verbosity') == 3
assert log_has('Verbosity set to 3', caplog.record_tuples) assert log_has('Verbosity set to 3', caplog)
def test_set_loggers() -> None: def test_set_loggers() -> None:
@ -604,7 +609,7 @@ def test_load_config_warn_forcebuy(default_conf, mocker, caplog) -> None:
validated_conf = configuration.load_config() validated_conf = configuration.load_config()
assert validated_conf.get('forcebuy_enable') assert validated_conf.get('forcebuy_enable')
assert log_has('`forcebuy` RPC message enabled.', caplog.record_tuples) assert log_has('`forcebuy` RPC message enabled.', caplog)
def test_validate_default_conf(default_conf) -> None: def test_validate_default_conf(default_conf) -> None:
@ -617,7 +622,7 @@ def test_create_datadir(mocker, default_conf, caplog) -> None:
create_datadir(default_conf, '/foo/bar') create_datadir(default_conf, '/foo/bar')
assert md.call_args[1]['parents'] is True assert md.call_args[1]['parents'] is True
assert log_has('Created data directory: /foo/bar', caplog.record_tuples) assert log_has('Created data directory: /foo/bar', caplog)
def test_create_userdata_dir(mocker, default_conf, caplog) -> None: def test_create_userdata_dir(mocker, default_conf, caplog) -> None:
@ -675,6 +680,17 @@ def test_validate_tsl(default_conf):
configuration._validate_config_consistency(default_conf) configuration._validate_config_consistency(default_conf)
def test_load_config_test_comments() -> None:
"""
Load config with comments
"""
config_file = Path(__file__).parents[0] / "config_test_comments.json"
print(config_file)
conf = load_config_file(str(config_file))
assert conf
def test_load_config_default_exchange(all_conf) -> None: def test_load_config_default_exchange(all_conf) -> None:
""" """
config['exchange'] subtree has required options in it config['exchange'] subtree has required options in it
File diff suppressed because it is too large
View File
@ -60,8 +60,8 @@ def test_main_fatal_exception(mocker, default_conf, caplog) -> None:
# Test Main + the KeyboardInterrupt exception # Test Main + the KeyboardInterrupt exception
with pytest.raises(SystemExit): with pytest.raises(SystemExit):
main(args) main(args)
assert log_has('Using config: config.json.example ...', caplog.record_tuples) assert log_has('Using config: config.json.example ...', caplog)
assert log_has('Fatal exception!', caplog.record_tuples) assert log_has('Fatal exception!', caplog)
def test_main_keyboard_interrupt(mocker, default_conf, caplog) -> None: def test_main_keyboard_interrupt(mocker, default_conf, caplog) -> None:
@ -77,8 +77,8 @@ def test_main_keyboard_interrupt(mocker, default_conf, caplog) -> None:
# Test Main + the KeyboardInterrupt exception # Test Main + the KeyboardInterrupt exception
with pytest.raises(SystemExit): with pytest.raises(SystemExit):
main(args) main(args)
assert log_has('Using config: config.json.example ...', caplog.record_tuples) assert log_has('Using config: config.json.example ...', caplog)
assert log_has('SIGINT received, aborting ...', caplog.record_tuples) assert log_has('SIGINT received, aborting ...', caplog)
def test_main_operational_exception(mocker, default_conf, caplog) -> None: def test_main_operational_exception(mocker, default_conf, caplog) -> None:
@ -97,8 +97,8 @@ def test_main_operational_exception(mocker, default_conf, caplog) -> None:
# Test Main + the KeyboardInterrupt exception # Test Main + the KeyboardInterrupt exception
with pytest.raises(SystemExit): with pytest.raises(SystemExit):
main(args) main(args)
assert log_has('Using config: config.json.example ...', caplog.record_tuples) assert log_has('Using config: config.json.example ...', caplog)
assert log_has('Oh snap!', caplog.record_tuples) assert log_has('Oh snap!', caplog)
def test_main_reload_conf(mocker, default_conf, caplog) -> None: def test_main_reload_conf(mocker, default_conf, caplog) -> None:
@ -121,7 +121,7 @@ def test_main_reload_conf(mocker, default_conf, caplog) -> None:
with pytest.raises(SystemExit): with pytest.raises(SystemExit):
main(['-c', 'config.json.example']) main(['-c', 'config.json.example'])
assert log_has('Using config: config.json.example ...', caplog.record_tuples) assert log_has('Using config: config.json.example ...', caplog)
assert worker_mock.call_count == 4 assert worker_mock.call_count == 4
assert reconfigure_mock.call_count == 1 assert reconfigure_mock.call_count == 1
assert isinstance(worker.freqtrade, FreqtradeBot) assert isinstance(worker.freqtrade, FreqtradeBot)
View File
@ -151,7 +151,7 @@ def test_update_with_bittrex(limit_buy_order, limit_sell_order, fee, caplog):
assert trade.close_date is None assert trade.close_date is None
assert log_has("LIMIT_BUY has been fulfilled for Trade(id=2, " assert log_has("LIMIT_BUY has been fulfilled for Trade(id=2, "
"pair=ETH/BTC, amount=90.99181073, open_rate=0.00001099, open_since=closed).", "pair=ETH/BTC, amount=90.99181073, open_rate=0.00001099, open_since=closed).",
caplog.record_tuples) caplog)
caplog.clear() caplog.clear()
trade.open_order_id = 'something' trade.open_order_id = 'something'
@ -162,7 +162,7 @@ def test_update_with_bittrex(limit_buy_order, limit_sell_order, fee, caplog):
assert trade.close_date is not None assert trade.close_date is not None
assert log_has("LIMIT_SELL has been fulfilled for Trade(id=2, " assert log_has("LIMIT_SELL has been fulfilled for Trade(id=2, "
"pair=ETH/BTC, amount=90.99181073, open_rate=0.00001099, open_since=closed).", "pair=ETH/BTC, amount=90.99181073, open_rate=0.00001099, open_since=closed).",
caplog.record_tuples) caplog)
@pytest.mark.usefixtures("init_persistence") @pytest.mark.usefixtures("init_persistence")
@ -184,7 +184,7 @@ def test_update_market_order(market_buy_order, market_sell_order, fee, caplog):
assert trade.close_date is None assert trade.close_date is None
assert log_has("MARKET_BUY has been fulfilled for Trade(id=1, " assert log_has("MARKET_BUY has been fulfilled for Trade(id=1, "
"pair=ETH/BTC, amount=91.99181073, open_rate=0.00004099, open_since=closed).", "pair=ETH/BTC, amount=91.99181073, open_rate=0.00004099, open_since=closed).",
caplog.record_tuples) caplog)
caplog.clear() caplog.clear()
trade.open_order_id = 'something' trade.open_order_id = 'something'
@ -195,7 +195,7 @@ def test_update_market_order(market_buy_order, market_sell_order, fee, caplog):
assert trade.close_date is not None assert trade.close_date is not None
assert log_has("MARKET_SELL has been fulfilled for Trade(id=1, " assert log_has("MARKET_SELL has been fulfilled for Trade(id=1, "
"pair=ETH/BTC, amount=91.99181073, open_rate=0.00004099, open_since=closed).", "pair=ETH/BTC, amount=91.99181073, open_rate=0.00004099, open_since=closed).",
caplog.record_tuples) caplog)
@pytest.mark.usefixtures("init_persistence") @pytest.mark.usefixtures("init_persistence")
@ -558,10 +558,9 @@ def test_migrate_new(mocker, default_conf, fee, caplog):
assert trade.ticker_interval is None assert trade.ticker_interval is None
assert trade.stoploss_order_id is None assert trade.stoploss_order_id is None
assert trade.stoploss_last_update is None assert trade.stoploss_last_update is None
assert log_has("trying trades_bak1", caplog.record_tuples) assert log_has("trying trades_bak1", caplog)
assert log_has("trying trades_bak2", caplog.record_tuples) assert log_has("trying trades_bak2", caplog)
assert log_has("Running database migration - backup available as trades_bak2", assert log_has("Running database migration - backup available as trades_bak2", caplog)
caplog.record_tuples)
def test_migrate_mid_state(mocker, default_conf, fee, caplog): def test_migrate_mid_state(mocker, default_conf, fee, caplog):
@ -621,9 +620,8 @@ def test_migrate_mid_state(mocker, default_conf, fee, caplog):
assert trade.max_rate == 0.0 assert trade.max_rate == 0.0
assert trade.stop_loss == 0.0 assert trade.stop_loss == 0.0
assert trade.initial_stop_loss == 0.0 assert trade.initial_stop_loss == 0.0
assert log_has("trying trades_bak0", caplog.record_tuples) assert log_has("trying trades_bak0", caplog)
assert log_has("Running database migration - backup available as trades_bak0", assert log_has("Running database migration - backup available as trades_bak0", caplog)
caplog.record_tuples)
def test_adjust_stop_loss(fee): def test_adjust_stop_loss(fee):
View File
@ -6,7 +6,7 @@ from unittest.mock import MagicMock
import plotly.graph_objects as go import plotly.graph_objects as go
from plotly.subplots import make_subplots from plotly.subplots import make_subplots
from freqtrade.configuration import Arguments, TimeRange from freqtrade.configuration import TimeRange
from freqtrade.data import history from freqtrade.data import history
from freqtrade.data.btanalysis import create_cum_profit, load_backtest_data from freqtrade.data.btanalysis import create_cum_profit, load_backtest_data
from freqtrade.plot.plotting import (add_indicators, add_profit, from freqtrade.plot.plotting import (add_indicators, add_profit,
@ -88,7 +88,7 @@ def test_add_indicators(default_conf, caplog):
# No indicator found # No indicator found
fig3 = add_indicators(fig=deepcopy(fig), row=3, indicators=['no_indicator'], data=data) fig3 = add_indicators(fig=deepcopy(fig), row=3, indicators=['no_indicator'], data=data)
assert fig == fig3 assert fig == fig3
assert log_has_re(r'Indicator "no_indicator" ignored\..*', caplog.record_tuples) assert log_has_re(r'Indicator "no_indicator" ignored\..*', caplog)
def test_plot_trades(caplog): def test_plot_trades(caplog):
@ -96,7 +96,7 @@ def test_plot_trades(caplog):
# nothing happens when no trades are available # nothing happens when no trades are available
fig = plot_trades(fig1, None) fig = plot_trades(fig1, None)
assert fig == fig1 assert fig == fig1
assert log_has("No trades found.", caplog.record_tuples) assert log_has("No trades found.", caplog)
pair = "ADA/BTC" pair = "ADA/BTC"
filename = history.make_testdata_path(None) / "backtest-result_test.json" filename = history.make_testdata_path(None) / "backtest-result_test.json"
trades = load_backtest_data(filename) trades = load_backtest_data(filename)
@ -151,8 +151,8 @@ def test_generate_candlestick_graph_no_signals_no_trades(default_conf, mocker, c
assert row_mock.call_count == 2 assert row_mock.call_count == 2
assert trades_mock.call_count == 1 assert trades_mock.call_count == 1
assert log_has("No buy-signals found.", caplog.record_tuples) assert log_has("No buy-signals found.", caplog)
assert log_has("No sell-signals found.", caplog.record_tuples) assert log_has("No sell-signals found.", caplog)
def test_generate_candlestick_graph_no_trades(default_conf, mocker): def test_generate_candlestick_graph_no_trades(default_conf, mocker):
@ -218,13 +218,13 @@ def test_generate_plot_file(mocker, caplog):
assert (plot_mock.call_args_list[0][1]['filename'] assert (plot_mock.call_args_list[0][1]['filename']
== "user_data/plots/freqtrade-plot-UNITTEST_BTC-5m.html") == "user_data/plots/freqtrade-plot-UNITTEST_BTC-5m.html")
assert log_has("Stored plot as user_data/plots/freqtrade-plot-UNITTEST_BTC-5m.html", assert log_has("Stored plot as user_data/plots/freqtrade-plot-UNITTEST_BTC-5m.html",
caplog.record_tuples) caplog)
def test_add_profit(): def test_add_profit():
filename = history.make_testdata_path(None) / "backtest-result_test.json" filename = history.make_testdata_path(None) / "backtest-result_test.json"
bt_data = load_backtest_data(filename) bt_data = load_backtest_data(filename)
timerange = Arguments.parse_timerange("20180110-20180112") timerange = TimeRange.parse_timerange("20180110-20180112")
df = history.load_pair_history(pair="POWR/BTC", ticker_interval='5m', df = history.load_pair_history(pair="POWR/BTC", ticker_interval='5m',
datadir=None, timerange=timerange) datadir=None, timerange=timerange)
@ -244,7 +244,7 @@ def test_add_profit():
def test_generate_profit_graph(): def test_generate_profit_graph():
filename = history.make_testdata_path(None) / "backtest-result_test.json" filename = history.make_testdata_path(None) / "backtest-result_test.json"
trades = load_backtest_data(filename) trades = load_backtest_data(filename)
timerange = Arguments.parse_timerange("20180110-20180112") timerange = TimeRange.parse_timerange("20180110-20180112")
pairs = ["POWR/BTC", "XLM/BTC"] pairs = ["POWR/BTC", "XLM/BTC"]
tickers = history.load_data(datadir=None, tickers = history.load_data(datadir=None,
View File
@ -0,0 +1,28 @@
# pragma pylint: disable=missing-docstring, C0103
import pytest
from freqtrade.configuration import TimeRange
def test_parse_timerange_incorrect() -> None:
assert TimeRange(None, 'line', 0, -200) == TimeRange.parse_timerange('-200')
assert TimeRange('line', None, 200, 0) == TimeRange.parse_timerange('200-')
assert TimeRange('index', 'index', 200, 500) == TimeRange.parse_timerange('200-500')
assert TimeRange('date', None, 1274486400, 0) == TimeRange.parse_timerange('20100522-')
assert TimeRange(None, 'date', 0, 1274486400) == TimeRange.parse_timerange('-20100522')
timerange = TimeRange.parse_timerange('20100522-20150730')
assert timerange == TimeRange('date', 'date', 1274486400, 1438214400)
# Added test for unix timestamp - BTC genesis date
assert TimeRange('date', None, 1231006505, 0) == TimeRange.parse_timerange('1231006505-')
assert TimeRange(None, 'date', 0, 1233360000) == TimeRange.parse_timerange('-1233360000')
timerange = TimeRange.parse_timerange('1231006505-1233360000')
assert TimeRange('date', 'date', 1231006505, 1233360000) == timerange
# TODO: Find solution for the following case (passing timestamp in ms)
timerange = TimeRange.parse_timerange('1231006505000-1233360000000')
assert TimeRange('date', 'date', 1231006505, 1233360000) != timerange
with pytest.raises(Exception, match=r'Incorrect syntax.*'):
TimeRange.parse_timerange('-')
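The timerange parsing test moved into its own module because `parse_timerange` is now a classmethod on `TimeRange` rather than a static helper on `Arguments`; the plotting tests and the download script elsewhere in this diff were updated to match. Typical usage after the move:

from freqtrade.configuration import TimeRange

# '-' separates start and stop; either side may be left empty.
timerange = TimeRange.parse_timerange('20180110-20180112')
assert timerange == TimeRange('date', 'date', 1515542400, 1515715200)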
View File
@ -127,11 +127,10 @@ class Worker(object):
time.sleep(duration) time.sleep(duration)
return result return result
def _process(self) -> bool: def _process(self) -> None:
logger.debug("========================================") logger.debug("========================================")
state_changed = False
try: try:
state_changed = self.freqtrade.process() self.freqtrade.process()
except TemporaryError as error: except TemporaryError as error:
logger.warning(f"Error: {error}, retrying in {constants.RETRY_TIMEOUT} seconds...") logger.warning(f"Error: {error}, retrying in {constants.RETRY_TIMEOUT} seconds...")
time.sleep(constants.RETRY_TIMEOUT) time.sleep(constants.RETRY_TIMEOUT)
@ -144,10 +143,6 @@ class Worker(object):
}) })
logger.exception('OperationalException. Stopping trader ...') logger.exception('OperationalException. Stopping trader ...')
self.freqtrade.state = State.STOPPED self.freqtrade.state = State.STOPPED
# TODO: The return value of _process() is not used apart tests
# and should (could) be eliminated later. See PR #1689.
# state_changed = True
return state_changed
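With `_process()` no longer reporting whether state changed, the surrounding throttled worker loop simply invokes it and sleeps. The actual `Worker._worker`/`_throttle` code is not shown in this excerpt, so the call site is only sketched here as an assumption:

# Conceptual sketch only; the real loop in freqtrade/worker.py is more involved.
import time

def run_iteration(worker, throttle_secs: float) -> None:
    start = time.time()
    worker._process()                       # no return value to inspect any more
    elapsed = time.time() - start
    time.sleep(max(throttle_secs - elapsed, 0.0))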
def _reconfigure(self) -> None: def _reconfigure(self) -> None:
""" """
View File
@ -1,9 +1,9 @@
# requirements without requirements installable via conda # requirements without requirements installable via conda
# mainly used for Raspberry pi installs # mainly used for Raspberry pi installs
ccxt==1.18.1021 ccxt==1.18.1063
SQLAlchemy==1.3.6 SQLAlchemy==1.3.7
python-telegram-bot==11.1.0 python-telegram-bot==11.1.0
arrow==0.14.4 arrow==0.14.5
cachetools==3.1.1 cachetools==3.1.1
requests==2.22.0 requests==2.22.0
urllib3==1.25.3 urllib3==1.25.3
@ -23,10 +23,13 @@ filelock==3.0.12
py_find_1st==1.1.4 py_find_1st==1.1.4
#Load ticker files 30% faster #Load ticker files 30% faster
python-rapidjson==0.7.2 python-rapidjson==0.8.0
# Notify systemd # Notify systemd
sdnotify==0.3.2 sdnotify==0.3.2
# Api server # Api server
flask==1.1.1 flask==1.1.1
# Support for colorized terminal output
colorama==0.4.1
View File
@ -7,7 +7,7 @@ flake8==3.7.8
flake8-type-annotations==0.1.0 flake8-type-annotations==0.1.0
flake8-tidy-imports==2.0.0 flake8-tidy-imports==2.0.0
mypy==0.720 mypy==0.720
pytest==5.0.1 pytest==5.1.0
pytest-asyncio==0.10.0 pytest-asyncio==0.10.0
pytest-cov==2.7.1 pytest-cov==2.7.1
pytest-mock==1.10.4 pytest-mock==1.10.4
View File
@ -3,4 +3,4 @@
numpy==1.17.0 numpy==1.17.0
pandas==0.25.0 pandas==0.25.0
scipy==1.3.0 scipy==1.3.1
View File
@ -103,7 +103,7 @@ if not pairs or args.pairs_file:
timerange = TimeRange() timerange = TimeRange()
if args.days: if args.days:
time_since = arrow.utcnow().shift(days=-args.days).strftime("%Y%m%d") time_since = arrow.utcnow().shift(days=-args.days).strftime("%Y%m%d")
timerange = arguments.parse_timerange(f'{time_since}-') timerange = TimeRange.parse_timerange(f'{time_since}-')
logger.info(f'About to download pairs: {pairs}, intervals: {timeframes} to {dl_path}') logger.info(f'About to download pairs: {pairs}, intervals: {timeframes} to {dl_path}')
View File
@ -64,6 +64,7 @@ setup(name='freqtrade',
'py_find_1st', 'py_find_1st',
'python-rapidjson', 'python-rapidjson',
'sdnotify', 'sdnotify',
'colorama',
# from requirements.txt # from requirements.txt
'numpy', 'numpy',
'pandas', 'pandas',
View File
@ -1,11 +1,10 @@
# pragma pylint: disable=missing-docstring, invalid-name, pointless-string-statement # pragma pylint: disable=missing-docstring, invalid-name, pointless-string-statement
from functools import reduce from functools import reduce
from math import exp
from typing import Any, Callable, Dict, List from typing import Any, Callable, Dict, List
from datetime import datetime from datetime import datetime
import numpy as np# noqa F401 import numpy as np
import talib.abstract as ta import talib.abstract as ta
from pandas import DataFrame from pandas import DataFrame
from skopt.space import Categorical, Dimension, Integer, Real from skopt.space import Categorical, Dimension, Integer, Real
@ -16,7 +15,7 @@ from freqtrade.optimize.hyperopt_interface import IHyperOpt
class SampleHyperOpts(IHyperOpt): class SampleHyperOpts(IHyperOpt):
""" """
This is a sample hyperopt to inspire you. This is a sample Hyperopt to inspire you.
Feel free to customize it. Feel free to customize it.
More information in https://github.com/freqtrade/freqtrade/blob/develop/docs/hyperopt.md More information in https://github.com/freqtrade/freqtrade/blob/develop/docs/hyperopt.md
@ -37,32 +36,44 @@ class SampleHyperOpts(IHyperOpt):
""" """
@staticmethod @staticmethod
def populate_indicators(dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_indicators(dataframe: DataFrame, metadata: dict) -> DataFrame:
"""
Add several indicators needed for buy and sell strategies defined below.
"""
# ADX
dataframe['adx'] = ta.ADX(dataframe) dataframe['adx'] = ta.ADX(dataframe)
# MACD
macd = ta.MACD(dataframe) macd = ta.MACD(dataframe)
dataframe['macd'] = macd['macd'] dataframe['macd'] = macd['macd']
dataframe['macdsignal'] = macd['macdsignal'] dataframe['macdsignal'] = macd['macdsignal']
# MFI
dataframe['mfi'] = ta.MFI(dataframe) dataframe['mfi'] = ta.MFI(dataframe)
# RSI
dataframe['rsi'] = ta.RSI(dataframe) dataframe['rsi'] = ta.RSI(dataframe)
# Stochastic Fast
stoch_fast = ta.STOCHF(dataframe) stoch_fast = ta.STOCHF(dataframe)
dataframe['fastd'] = stoch_fast['fastd'] dataframe['fastd'] = stoch_fast['fastd']
# Minus-DI
dataframe['minus_di'] = ta.MINUS_DI(dataframe) dataframe['minus_di'] = ta.MINUS_DI(dataframe)
# Bollinger bands # Bollinger bands
bollinger = qtpylib.bollinger_bands(qtpylib.typical_price(dataframe), window=20, stds=2) bollinger = qtpylib.bollinger_bands(qtpylib.typical_price(dataframe), window=20, stds=2)
dataframe['bb_lowerband'] = bollinger['lower'] dataframe['bb_lowerband'] = bollinger['lower']
dataframe['bb_upperband'] = bollinger['upper'] dataframe['bb_upperband'] = bollinger['upper']
# SAR
dataframe['sar'] = ta.SAR(dataframe) dataframe['sar'] = ta.SAR(dataframe)
return dataframe return dataframe
@staticmethod @staticmethod
def buy_strategy_generator(params: Dict[str, Any]) -> Callable: def buy_strategy_generator(params: Dict[str, Any]) -> Callable:
""" """
Define the buy strategy parameters to be used by hyperopt Define the buy strategy parameters to be used by Hyperopt.
""" """
def populate_buy_trend(dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_buy_trend(dataframe: DataFrame, metadata: dict) -> DataFrame:
""" """
Buy strategy Hyperopt will build and use Buy strategy Hyperopt will build and use.
""" """
conditions = [] conditions = []
# GUARDS AND TRENDS # GUARDS AND TRENDS
if 'mfi-enabled' in params and params['mfi-enabled']: if 'mfi-enabled' in params and params['mfi-enabled']:
conditions.append(dataframe['mfi'] < params['mfi-value']) conditions.append(dataframe['mfi'] < params['mfi-value'])
@ -98,7 +109,7 @@ class SampleHyperOpts(IHyperOpt):
@staticmethod @staticmethod
def indicator_space() -> List[Dimension]: def indicator_space() -> List[Dimension]:
""" """
Define your Hyperopt space for searching strategy parameters Define your Hyperopt space for searching buy strategy parameters.
""" """
return [ return [
Integer(10, 25, name='mfi-value'), Integer(10, 25, name='mfi-value'),
@ -115,14 +126,14 @@ class SampleHyperOpts(IHyperOpt):
@staticmethod @staticmethod
def sell_strategy_generator(params: Dict[str, Any]) -> Callable: def sell_strategy_generator(params: Dict[str, Any]) -> Callable:
""" """
Define the sell strategy parameters to be used by hyperopt Define the sell strategy parameters to be used by Hyperopt.
""" """
def populate_sell_trend(dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_sell_trend(dataframe: DataFrame, metadata: dict) -> DataFrame:
""" """
Sell strategy Hyperopt will build and use Sell strategy Hyperopt will build and use.
""" """
# print(params)
conditions = [] conditions = []
# GUARDS AND TRENDS # GUARDS AND TRENDS
if 'sell-mfi-enabled' in params and params['sell-mfi-enabled']: if 'sell-mfi-enabled' in params and params['sell-mfi-enabled']:
conditions.append(dataframe['mfi'] > params['sell-mfi-value']) conditions.append(dataframe['mfi'] > params['sell-mfi-value'])
@ -158,7 +169,7 @@ class SampleHyperOpts(IHyperOpt):
@staticmethod @staticmethod
def sell_indicator_space() -> List[Dimension]: def sell_indicator_space() -> List[Dimension]:
""" """
Define your Hyperopt space for searching sell strategy parameters Define your Hyperopt space for searching sell strategy parameters.
""" """
return [ return [
Integer(75, 100, name='sell-mfi-value'), Integer(75, 100, name='sell-mfi-value'),
@ -176,9 +187,9 @@ class SampleHyperOpts(IHyperOpt):
def populate_buy_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_buy_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
""" """
Based on TA indicators. Should be a copy of from strategy Based on TA indicators. Should be a copy of same method from strategy.
must align to populate_indicators in this file Must align to populate_indicators in this file.
Only used when --spaces does not include buy Only used when --spaces does not include buy space.
""" """
dataframe.loc[ dataframe.loc[
( (
@ -193,9 +204,9 @@ class SampleHyperOpts(IHyperOpt):
def populate_sell_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame: def populate_sell_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
""" """
Based on TA indicators. Should be a copy of from strategy Based on TA indicators. Should be a copy of same method from strategy.
must align to populate_indicators in this file Must align to populate_indicators in this file.
Only used when --spaces does not include sell Only used when --spaces does not include sell space.
""" """
dataframe.loc[ dataframe.loc[
( (
@ -205,4 +216,5 @@ class SampleHyperOpts(IHyperOpt):
(dataframe['fastd'] > 54) (dataframe['fastd'] > 54)
), ),
'sell'] = 1 'sell'] = 1
return dataframe return dataframe