Merge branch 'freqtrade:develop' into develop

This commit is contained in:
smarmau 2022-10-02 21:03:35 +11:00 committed by GitHub
commit a17c32577f
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
45 changed files with 467 additions and 180 deletions

Binary file not shown.


@ -60,11 +60,18 @@ Binance supports [time_in_force](configuration.md#understand-order_time_in_force
Binance supports `stoploss_on_exchange` and uses `stop-loss-limit` orders. It provides great advantages, so we recommend taking advantage of it by enabling stoploss on exchange.
On futures, Binance supports both `stop-limit` as well as `stop-market` orders. You can use either `"limit"` or `"market"` in the `order_types.stoploss` configuration setting to decide which type to use.
### Binance Blacklist recommendation
For Binance, it is suggested to add `"BNB/<STAKE>"` to your blacklist to avoid issues, unless you are willing to maintain enough extra `BNB` in the account, or to disable using `BNB` for fees.
Binance accounts may use `BNB` for fees, and if a trade happens to be on `BNB`, further trades may consume this position and make the initial `BNB` trade unsellable, as the expected amount is no longer there.
### Binance sites
Binance has been split into 2, and users must use the correct ccxt exchange ID for their exchange, otherwise API keys are not recognized.
* [binance.com](https://www.binance.com/) - International users. Use exchange id: `binance`.
* [binance.us](https://www.binance.us/) - US based users. Use exchange id: `binanceus`.
### Binance Futures
Binance has specific (unfortunately complex) [Futures Trading Quantitative Rules](https://www.binance.com/en/support/faq/4f462ebe6ff445d4a170be7d9e897272) which need to be followed, and which prohibit too low a stake-amount (among other things) for too many orders.
@ -87,12 +94,14 @@ When trading on Binance Futures market, orderbook must be used because there is
},
```
#### Binance futures settings
Users will also need to set the futures settings "Position Mode" to "One-way Mode" and "Asset Mode" to "Single-Asset Mode".
These settings will be checked on startup, and freqtrade will show an error if they are wrong.
![Binance futures settings](assets/binance_futures_settings.png)
Freqtrade will not attempt to change these settings.
## Kraken


@ -27,8 +27,7 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
| `weight_factor` | Weight training data points according to their recency (see details [here](freqai-feature-engineering.md#weighting-features-for-temporal-importance)). <br> **Datatype:** Positive float (typically < 1).
| `indicator_max_period_candles` | **No longer used (#7325)**. Replaced by `startup_candle_count`, which is set in the [strategy](freqai-configuration.md#building-a-freqai-strategy). `startup_candle_count` is timeframe independent and defines the maximum *period* used in `populate_any_indicators()` for indicator creation. `FreqAI` uses this parameter together with the maximum timeframe in `include_time_frames` to calculate how many data points to download such that the first data point does not include a NaN. <br> **Datatype:** Positive integer.
| `indicator_periods_candles` | Time periods to calculate indicators for. The indicators are added to the base indicator dataset. <br> **Datatype:** List of positive integers.
| `stratify_training_data` | Split the feature set into training and testing datasets. For example, `stratify_training_data: 2` would set every 2nd data point into a separate dataset to be pulled from during training/testing. See details about how it works [here](freqai-running.md#data-stratification-for-training-and-testing-the-model). <br> **Datatype:** Positive integer.
| `principal_component_analysis` | Automatically reduce the dimensionality of the data set using Principal Component Analysis. See details about how it works [here](#reducing-data-dimensionality-with-principal-component-analysis). <br> **Datatype:** Boolean, defaults to `False`.
| `plot_feature_importances` | Create a feature importance plot for each model for the top/bottom `plot_feature_importances` number of features. <br> **Datatype:** Integer, defaults to `0`.
| `DI_threshold` | Activates the use of the Dissimilarity Index for outlier detection when set to > 0. See details about how it works [here](freqai-feature-engineering.md#identifying-outliers-with-the-dissimilarity-index-di). <br> **Datatype:** Positive float (typically < 1).
| `use_SVM_to_remove_outliers` | Train a support vector machine to detect and remove outliers from the training dataset, as well as from incoming data points. See details about how it works [here](freqai-feature-engineering.md#identifying-outliers-using-a-support-vector-machine-svm). <br> **Datatype:** Boolean.
@ -41,7 +40,7 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
|  | **Data split parameters**
| `data_split_parameters` | Include any additional parameters available from Scikit-learn `train_test_split()`, which are shown [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) (external website). <br> **Datatype:** Dictionary.
| `test_size` | The fraction of data that should be used for testing instead of training. <br> **Datatype:** Positive float < 1.
| `shuffle` | Shuffle the training data points during training. Typically, this is set to `False` for time-series forecasting to preserve the chronological order of the data. <br> **Datatype:** Boolean, defaults to `False`.
|  | **Model training parameters**
| `model_training_parameters` | A flexible dictionary that includes all parameters available for the selected model library. For example, if you use `LightGBMRegressor`, this dictionary can contain any parameter available for the `LightGBMRegressor` [here](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html) (external website). If you select a different model, this dictionary can contain any parameter from that model. <br> **Datatype:** Dictionary.
| `n_estimators` | The number of boosted trees to fit in regression. <br> **Datatype:** Integer.
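As an illustration (not part of the table above), a minimal sketch of how `data_split_parameters` such as `test_size` and `shuffle` are forwarded to scikit-learn's `train_test_split()`; the feature and label arrays here are random placeholders:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder feature matrix and labels standing in for FreqAI's filtered dataframe.
features = np.random.rand(1000, 10)
labels = np.random.rand(1000)

# data_split_parameters from the config are passed through to train_test_split().
data_split_parameters = {"test_size": 0.25, "shuffle": False}

# shuffle=False keeps the chronological order, so the test split is simply the
# most recent 25% of the data points.
train_x, test_x, train_y, test_y = train_test_split(features, labels, **data_split_parameters)

print(train_x.shape, test_x.shape)  # (750, 10) (250, 10)
```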


@ -105,23 +105,6 @@ During dry/live mode, FreqAI trains each coin pair sequentially (on separate thr
In the presented example config, the user will only allow predictions on models that are less than 1/2 hour old.
## Data stratification for training and testing the model
You can stratify (group) the training/testing data using:
```json
"freqai": {
"feature_parameters" : {
"stratify_training_data": 3
}
}
```
This will split the data chronologically so that every Xth data point is used to test the model after training. In the example above, the user is asking for every third data point in the dataframe to be used for
testing; the other points are used for training.
The test data is used to evaluate the performance of the model after training. If the test score is high, the model is able to capture the behavior of the data well. If the test score is low, either the model does not capture the complexity of the data, the test data is significantly different from the train data, or a different type of model should be used.
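For reference, the chronological every-Nth-point split described above can be sketched with plain numpy; the array below is a stand-in for the rows of the training dataframe, and `stratify_every_n` mirrors `stratify_training_data: 3`:

```python
import numpy as np

# A toy array standing in for the rows of the training dataframe.
data = np.arange(12)
stratify_every_n = 3  # corresponds to "stratify_training_data": 3

# Every Nth row (by index) goes to the test set; the rest are used for training.
is_test = np.array([i % stratify_every_n == 0 and i != 0 for i in range(len(data))])

train_points = data[~is_test]
test_points = data[is_test]
print(test_points)   # [3 6 9] - every third data point
print(train_points)  # the remaining points, in chronological order
```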
## Controlling the model learning process
Model training parameters are unique to the selected machine learning library. FreqAI allows you to set any parameter for any library using the `model_training_parameters` dictionary in the config. The example config (found in `config_examples/config_freqai.example.json`) shows some of the example parameters associated with `Catboost` and `LightGBM`, but you can add any parameters available in those libraries or any other machine learning library you choose to implement.


@ -22,6 +22,7 @@ You may also use something like `.*DOWN/BTC` or `.*UP/BTC` to exclude leveraged
* [`StaticPairList`](#static-pair-list) (default, if not configured differently)
* [`VolumePairList`](#volume-pair-list)
* [`ProducerPairList`](#producerpairlist)
* [`AgeFilter`](#agefilter)
* [`OffsetFilter`](#offsetfilter)
* [`PerformanceFilter`](#performancefilter)
@ -84,7 +85,7 @@ Filtering instances (not the first position in the list) will not apply any cach
You can define a minimum volume with `min_value` - which will filter out pairs with a volume lower than the specified value in the specified timerange.
##### VolumePairList Advanced mode
`VolumePairList` can also operate in an advanced mode to build volume over a given timerange of specified candle size. It utilizes exchange historical candle data, builds a typical price (calculated by (open+high+low)/3) and multiplies the typical price with every candle's volume. The sum is the `quoteVolume` over the given range. This allows different scenarios: a smoother volume when using longer ranges with larger candle sizes, or the opposite when using a short range with small candles.
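For illustration, here is a minimal sketch, assuming only pandas and a toy OHLCV dataframe (the column names and values are made up), of how such a range `quoteVolume` is built from typical price and candle volume:

```python
import pandas as pd

# Toy OHLCV candles standing in for the exchange's historical data.
candles = pd.DataFrame({
    "open":   [100.0, 102.0, 101.0],
    "high":   [103.0, 104.0, 102.0],
    "low":    [ 99.0, 101.0, 100.0],
    "close":  [102.0, 101.0, 101.5],
    "volume": [ 10.0,  12.0,   8.0],
})

# Typical price per candle: (open + high + low) / 3.
typical_price = (candles["open"] + candles["high"] + candles["low"]) / 3

# quoteVolume over the lookback range: typical price * volume, summed over all candles.
quote_volume = (typical_price * candles["volume"]).sum()
print(round(quote_volume, 2))
```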
@ -146,6 +147,32 @@ More sophisticated approach can be used, by using `lookback_timeframe` for candl
!!! Note
`VolumePairList` does not support backtesting mode.
#### ProducerPairList
With `ProducerPairList`, you can reuse the pairlist from a [Producer](producer-consumer.md) without explicitly defining the pairlist on each consumer.
[Consumer mode](producer-consumer.md) is required for this pairlist to work.
The pairlist will perform a check on active pairs against the current exchange configuration to avoid attempting to trade on invalid markets.
You can limit the length of the pairlist with the optional parameter `number_assets`. Using `"number_assets"=0` or omitting this key will result in the reuse of all producer pairs valid for the current setup.
```json
"pairlists": [
{
"method": "ProducerPairList",
"number_assets": 5,
"producer_name": "default",
}
],
```
!!! Tip "Combining pairlists"
This pairlist can be combined with all other pairlists and filters for further pairlist reduction, and can also act as an "additional" pairlist, on top of already defined pairs.
`ProducerPairList` can also be used multiple times in sequence, combining the pairs from multiple producers.
Obviously, in such complex configurations, the Producer may not provide data for all pairs, so the strategy must be able to handle this.
#### AgeFilter
Removes pairs that have been listed on the exchange for less than `min_days_listed` days (defaults to `10`) or more than `max_days_listed` days (defaults to `None`, meaning infinity).


@ -643,7 +643,7 @@ This callback is **not** called when there is an open order (either buy or sell)
Additional Buys are ignored once you have reached the maximum amount of extra buys that you have set on `max_entry_position_adjustment`, but the callback is called anyway looking for partial exits.
Position adjustments will always be applied in the direction of the trade, so a positive value will always increase your position (negative values will decrease your position), no matter if it's a long or short trade. Modifications to leverage are not possible, and the stake-amount is assumed to be before applying leverage.
!!! Note "About stake size"
Using fixed stake size means it will be the amount used for the first order, just like without position adjustment.
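As a rough illustration (not taken from the changed files), a position-adjustment callback consistent with the behaviour described above might look like the sketch below; the exact signature and the `nr_of_successful_entries` helper may differ between freqtrade versions, and the thresholds are arbitrary:

```python
from datetime import datetime
from typing import Optional

from freqtrade.persistence import Trade
from freqtrade.strategy import IStrategy


class AdjustExampleStrategy(IStrategy):
    # Strategy fragment - the usual populate_* methods are omitted here.
    position_adjustment_enable = True
    max_entry_position_adjustment = 1

    def adjust_trade_position(self, trade: Trade, current_time: datetime,
                              current_rate: float, current_profit: float,
                              min_stake: Optional[float], max_stake: float,
                              **kwargs) -> Optional[float]:
        # A positive return value increases the position, a negative one decreases it,
        # regardless of whether the trade is long or short. The value is a stake amount
        # *before* leverage is applied.
        if current_profit < -0.05 and trade.nr_of_successful_entries == 1:
            return trade.stake_amount        # add to the position once
        if current_profit > 0.10:
            return -(trade.stake_amount / 2)  # take off half the position
        return None
```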


@ -37,3 +37,12 @@ pip install -e .
# Ensure freqUI is at the latest version
freqtrade install-ui
```
### Problems updating
Update problems usually come from missing dependencies (you didn't follow the above instructions) - or from updated dependencies which fail to install (for example TA-Lib).
Please refer to the corresponding installation sections (common problems linked below).
Common problems and their solutions:
* [ta-lib update on windows](windows_installation.md#2-install-ta-lib)


@ -34,7 +34,7 @@ python -m venv .env
.env\Scripts\activate.ps1
# optionally install ta-lib from wheel
# Adjust the below filename to match the downloaded wheel, if necessary
pip install --find-links build_helpers\ TA-Lib
pip install --find-links build_helpers\ TA-Lib -U
pip install -r requirements.txt
pip install -e .
freqtrade


@ -31,7 +31,7 @@ HYPEROPT_LOSS_BUILTIN = ['ShortTradeDurHyperOptLoss', 'OnlyProfitHyperOptLoss',
'CalmarHyperOptLoss',
'MaxDrawDownHyperOptLoss', 'MaxDrawDownRelativeHyperOptLoss',
'ProfitDrawDownHyperOptLoss']
AVAILABLE_PAIRLISTS = ['StaticPairList', 'VolumePairList',
AVAILABLE_PAIRLISTS = ['StaticPairList', 'VolumePairList', 'ProducerPairList',
'AgeFilter', 'OffsetFilter', 'PerformanceFilter',
'PrecisionFilter', 'PriceFilter', 'RangeStabilityFilter',
'ShuffleFilter', 'SpreadFilter', 'VolatilityFilter']
@ -568,6 +568,7 @@ CONF_SCHEMA = {
"properties": {
"test_size": {"type": "number"},
"random_state": {"type": "integer"},
"shuffle": {"type": "boolean", "default": False}
},
},
"model_training_parameters": {


@ -47,8 +47,7 @@ def ohlcv_to_dataframe(ohlcv: list, timeframe: str, pair: str, *,
def clean_ohlcv_dataframe(data: DataFrame, timeframe: str, pair: str, *,
fill_missing: bool = True,
drop_incomplete: bool = True) -> DataFrame:
fill_missing: bool, drop_incomplete: bool) -> DataFrame:
"""
Cleanse a OHLCV dataframe by
* Grouping it by date (removes duplicate tics)
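With the defaults removed, callers now pass both flags explicitly. A minimal usage sketch (not part of the diff), assuming a freqtrade installation and a couple of ccxt-style candles; the pair name and values are placeholders:

```python
from freqtrade.data.converter import clean_ohlcv_dataframe, ohlcv_to_dataframe

# Raw ccxt-style candles: [timestamp(ms), open, high, low, close, volume]
raw = [
    [1630454400000, 100.0, 101.0, 99.0, 100.5, 12.0],
    [1630454460000, 100.5, 102.0, 100.0, 101.0, 9.0],
]

df = ohlcv_to_dataframe(raw, timeframe="1m", pair="BTC/USDT",
                        fill_missing=False, drop_incomplete=False)

# Both flags are now required keyword arguments - there is no implicit default.
cleaned = clean_ohlcv_dataframe(df, timeframe="1m", pair="BTC/USDT",
                                fill_missing=True, drop_incomplete=True)
print(cleaned)
```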


@ -26,7 +26,7 @@ def load_pair_history(pair: str,
datadir: Path, *,
timerange: Optional[TimeRange] = None,
fill_up_missing: bool = True,
drop_incomplete: bool = True,
drop_incomplete: bool = False,
startup_candles: int = 0,
data_format: str = None,
data_handler: IDataHandler = None,


@ -275,7 +275,7 @@ class IDataHandler(ABC):
candle_type: CandleType, *,
timerange: Optional[TimeRange] = None,
fill_missing: bool = True,
drop_incomplete: bool = True,
drop_incomplete: bool = False,
startup_candles: int = 0,
warn_no_data: bool = True,
) -> DataFrame:


@ -68,6 +68,37 @@ class Binance(Exchange):
tickers = deep_merge_dicts(bidsasks, tickers, allow_null_overrides=False)
return tickers
@retrier
def additional_exchange_init(self) -> None:
"""
Additional exchange initialization logic.
.api will be available at this point.
Must be overridden in child methods if required.
"""
try:
if self.trading_mode == TradingMode.FUTURES and not self._config['dry_run']:
position_side = self._api.fapiPrivateGetPositionsideDual()
self._log_exchange_response('position_side_setting', position_side)
assets_margin = self._api.fapiPrivateGetMultiAssetsMargin()
self._log_exchange_response('multi_asset_margin', assets_margin)
msg = ""
if position_side.get('dualSidePosition') is True:
msg += (
"\nHedge Mode is not supported by freqtrade. "
"Please change 'Position Mode' on your binance futures account.")
if assets_margin.get('multiAssetsMargin') is True:
msg += ("\nMulti-Asset Mode is not supported by freqtrade. "
"Please change 'Asset Mode' on your binance futures account.")
if msg:
raise OperationalException(msg)
except ccxt.DDoSProtection as e:
raise DDosProtection(e) from e
except (ccxt.NetworkError, ccxt.ExchangeError) as e:
raise TemporaryError(
f'Could not set leverage due to {e.__class__.__name__}. Message: {e}') from e
except ccxt.BaseError as e:
raise OperationalException(e) from e
@retrier
def _set_leverage(
self,


@ -1292,7 +1292,7 @@ class Exchange:
order = self.fetch_order(order_id, pair)
except InvalidOrderException:
logger.warning(f"Could not fetch cancelled order {order_id}.")
order = {'fee': {}, 'status': 'canceled', 'amount': amount, 'info': {}}
order = {'id': order_id, 'fee': {}, 'status': 'canceled', 'amount': amount, 'info': {}}
return order


@ -78,7 +78,8 @@ class Okx(Exchange):
raise DDosProtection(e) from e
except (ccxt.NetworkError, ccxt.ExchangeError) as e:
raise TemporaryError(
f'Could not set leverage due to {e.__class__.__name__}. Message: {e}') from e
f'Error in additional_exchange_init due to {e.__class__.__name__}. Message: {e}'
) from e
except ccxt.BaseError as e:
raise OperationalException(e) from e


@ -92,7 +92,7 @@ class BaseClassifierModel(IFreqaiModel):
filtered_df = dk.normalize_data_from_metadata(filtered_df)
dk.data_dictionary["prediction_features"] = filtered_df
self.data_cleaning_predict(dk, filtered_df)
self.data_cleaning_predict(dk)
predictions = self.model.predict(dk.data_dictionary["prediction_features"])
pred_df = DataFrame(predictions, columns=dk.label_list)


@ -92,7 +92,7 @@ class BaseRegressionModel(IFreqaiModel):
dk.data_dictionary["prediction_features"] = filtered_df
# optional additional data cleaning/analysis
self.data_cleaning_predict(dk, filtered_df)
self.data_cleaning_predict(dk)
predictions = self.model.predict(dk.data_dictionary["prediction_features"])
pred_df = DataFrame(predictions, columns=dk.label_list)


@ -423,7 +423,7 @@ class FreqaiDataDrawer:
dk.data["data_path"] = str(dk.data_path) dk.data["data_path"] = str(dk.data_path)
dk.data["model_filename"] = str(dk.model_filename) dk.data["model_filename"] = str(dk.model_filename)
dk.data["training_features_list"] = list(dk.data_dictionary["train_features"].columns) dk.data["training_features_list"] = dk.training_features_list
dk.data["label_list"] = dk.label_list dk.data["label_list"] = dk.label_list
# store the metadata # store the metadata
with open(save_path / f"{dk.model_filename}_metadata.json", "w") as fp: with open(save_path / f"{dk.model_filename}_metadata.json", "w") as fp:


@ -134,20 +134,15 @@ class FreqaiDataKitchen:
""" """
feat_dict = self.freqai_config["feature_parameters"] feat_dict = self.freqai_config["feature_parameters"]
if 'shuffle' not in self.freqai_config['data_split_parameters']:
self.freqai_config["data_split_parameters"].update({'shuffle': False})
weights: npt.ArrayLike weights: npt.ArrayLike
if feat_dict.get("weight_factor", 0) > 0: if feat_dict.get("weight_factor", 0) > 0:
weights = self.set_weights_higher_recent(len(filtered_dataframe)) weights = self.set_weights_higher_recent(len(filtered_dataframe))
else: else:
weights = np.ones(len(filtered_dataframe)) weights = np.ones(len(filtered_dataframe))
if feat_dict.get("stratify_training_data", 0) > 0:
stratification = np.zeros(len(filtered_dataframe))
for i in range(1, len(stratification)):
if i % feat_dict.get("stratify_training_data", 0) == 0:
stratification[i] = 1
else:
stratification = None
if self.freqai_config.get('data_split_parameters', {}).get('test_size', 0.1) != 0: if self.freqai_config.get('data_split_parameters', {}).get('test_size', 0.1) != 0:
( (
train_features, train_features,
@ -160,7 +155,6 @@ class FreqaiDataKitchen:
filtered_dataframe[: filtered_dataframe.shape[0]], filtered_dataframe[: filtered_dataframe.shape[0]],
labels, labels,
weights, weights,
stratify=stratification,
**self.config["freqai"]["data_split_parameters"], **self.config["freqai"]["data_split_parameters"],
) )
else: else:
@ -210,7 +204,7 @@ class FreqaiDataKitchen:
filtered_df = unfiltered_df.filter(training_feature_list, axis=1) filtered_df = unfiltered_df.filter(training_feature_list, axis=1)
filtered_df = filtered_df.replace([np.inf, -np.inf], np.nan) filtered_df = filtered_df.replace([np.inf, -np.inf], np.nan)
drop_index = pd.isnull(filtered_df).any(1) # get the rows that have NaNs, drop_index = pd.isnull(filtered_df).any(axis=1) # get the rows that have NaNs,
drop_index = drop_index.replace(True, 1).replace(False, 0) # pep8 requirement. drop_index = drop_index.replace(True, 1).replace(False, 0) # pep8 requirement.
if (training_filter): if (training_filter):
const_cols = list((filtered_df.nunique() == 1).loc[lambda x: x].index) const_cols = list((filtered_df.nunique() == 1).loc[lambda x: x].index)
@ -221,7 +215,7 @@ class FreqaiDataKitchen:
# about removing any row with NaNs # about removing any row with NaNs
# if labels has multiple columns (user wants to train multiple modelEs), we detect here # if labels has multiple columns (user wants to train multiple modelEs), we detect here
labels = unfiltered_df.filter(label_list, axis=1) labels = unfiltered_df.filter(label_list, axis=1)
drop_index_labels = pd.isnull(labels).any(1) drop_index_labels = pd.isnull(labels).any(axis=1)
drop_index_labels = drop_index_labels.replace(True, 1).replace(False, 0) drop_index_labels = drop_index_labels.replace(True, 1).replace(False, 0)
dates = unfiltered_df['date'] dates = unfiltered_df['date']
filtered_df = filtered_df[ filtered_df = filtered_df[
@ -249,7 +243,7 @@ class FreqaiDataKitchen:
else: else:
# we are backtesting so we need to preserve row number to send back to strategy, # we are backtesting so we need to preserve row number to send back to strategy,
# so now we use do_predict to avoid any prediction based on a NaN # so now we use do_predict to avoid any prediction based on a NaN
drop_index = pd.isnull(filtered_df).any(1) drop_index = pd.isnull(filtered_df).any(axis=1)
self.data["filter_drop_index_prediction"] = drop_index self.data["filter_drop_index_prediction"] = drop_index
filtered_df.fillna(0, inplace=True) filtered_df.fillna(0, inplace=True)
# replacing all NaNs with zeros to avoid issues in 'prediction', but any prediction # replacing all NaNs with zeros to avoid issues in 'prediction', but any prediction
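Several hunks here swap `.any(1)` for `.any(axis=1)`: newer pandas deprecates passing the axis positionally, while the NaN-row mask itself is unchanged. A tiny self-contained sketch with toy data of what that mask computes:

```python
import numpy as np
import pandas as pd

filtered_df = pd.DataFrame({
    "feat_a": [1.0, np.nan, 3.0],
    "feat_b": [4.0, 5.0, np.inf],
})

# Replace infinities with NaN, then flag any row containing a NaN.
filtered_df = filtered_df.replace([np.inf, -np.inf], np.nan)
drop_index = pd.isnull(filtered_df).any(axis=1)  # DataFrame.any(1) is deprecated
print(drop_index.tolist())  # [False, True, True]
```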
@ -808,7 +802,7 @@ class FreqaiDataKitchen:
:, :no_prev_pts
]
distances = distances.replace([np.inf, -np.inf], np.nan)
drop_index = pd.isnull(distances).any(1)
drop_index = pd.isnull(distances).any(axis=1)
distances = distances[drop_index == 0]
inliers = pd.DataFrame(index=distances.index)
@ -881,6 +875,7 @@ class FreqaiDataKitchen:
""" """
column_names = dataframe.columns column_names = dataframe.columns
features = [c for c in column_names if "%" in c] features = [c for c in column_names if "%" in c]
if not features: if not features:
raise OperationalException("Could not find any features!") raise OperationalException("Could not find any features!")


@ -275,7 +275,8 @@ class IFreqaiModel(ABC):
if dk.check_if_backtest_prediction_exists():
self.dd.load_metadata(dk)
self.check_if_feature_list_matches_strategy(dataframe_train, dk)
dk.find_features(dataframe_train)
self.check_if_feature_list_matches_strategy(dk)
append_df = dk.get_backtesting_prediction()
dk.append_predictions(append_df)
else:
@ -296,7 +297,6 @@ class IFreqaiModel(ABC):
else:
self.model = self.dd.load_data(pair, dk)
# self.check_if_feature_list_matches_strategy(dataframe_train, dk)
pred_df, do_preds = self.predict(dataframe_backtest, dk)
append_df = dk.get_predictions_to_append(pred_df, do_preds)
dk.append_predictions(append_df)
@ -420,7 +420,7 @@ class IFreqaiModel(ABC):
return
def check_if_feature_list_matches_strategy(
self, dataframe: DataFrame, dk: FreqaiDataKitchen
self, dk: FreqaiDataKitchen
) -> None:
"""
Ensure user is passing the proper feature set if they are reusing an `identifier` pointing
@ -429,11 +429,12 @@ class IFreqaiModel(ABC):
:param dk: FreqaiDataKitchen = non-persistent data container/analyzer for
current coin/bot loop
"""
dk.find_features(dataframe)
if "training_features_list_raw" in dk.data:
feature_list = dk.data["training_features_list_raw"]
else:
feature_list = dk.data['training_features_list']
if dk.training_features_list != feature_list:
raise OperationalException(
"Trying to access pretrained model with `identifier` "
@ -481,13 +482,16 @@ class IFreqaiModel(ABC):
if self.freqai_info["feature_parameters"].get('noise_standard_deviation', 0):
dk.add_noise_to_training_features()
def data_cleaning_predict(self, dk: FreqaiDataKitchen, dataframe: DataFrame) -> None:
def data_cleaning_predict(self, dk: FreqaiDataKitchen) -> None:
"""
Base data cleaning method for predict.
Functions here are complementary to the functions of data_cleaning_train.
"""
ft_params = self.freqai_info["feature_parameters"]
# ensure user is feeding the correct indicators to the model
self.check_if_feature_list_matches_strategy(dk)
if ft_params.get('inlier_metric_window', 0):
dk.compute_inlier_metric(set_='predict')
@ -505,9 +509,6 @@ class IFreqaiModel(ABC):
if ft_params.get("use_DBSCAN_to_remove_outliers", False):
dk.use_DBSCAN_to_remove_outliers(predict=True)
# ensure user is feeding the correct indicators to the model
self.check_if_feature_list_matches_strategy(dk.data_dictionary['prediction_features'], dk)
def model_exists(self, dk: FreqaiDataKitchen) -> bool:
"""
Given a pair and path, check if a model already exists


@ -82,7 +82,10 @@ class FreqtradeBot(LoggingMixin):
# Keep this at the end of this initialization method.
self.rpc: RPCManager = RPCManager(self)
self.dataprovider = DataProvider(self.config, self.exchange, self.pairlists, self.rpc)
self.dataprovider = DataProvider(self.config, self.exchange, rpc=self.rpc)
self.pairlists = PairListManager(self.exchange, self.config, self.dataprovider)
self.dataprovider.add_pairlisthandler(self.pairlists)
# Attach Dataprovider to strategy instance
self.strategy.dp = self.dataprovider
@ -597,7 +600,7 @@ class FreqtradeBot(LoggingMixin):
# We should decrease our position
amount = self.exchange.amount_to_contract_precision(
trade.pair,
abs(float(FtPrecise(stake_amount) / FtPrecise(current_exit_rate))))
abs(float(FtPrecise(stake_amount * trade.leverage) / FtPrecise(current_exit_rate))))
if amount > trade.amount:
# This is currently ineffective as remaining would become < min tradable
# Fixing this would require checking for 0.0 there -
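To make the `stake_amount * trade.leverage` change concrete, a small worked sketch with made-up numbers: the stake is collateral before leverage, while the amount sent to the exchange has to reflect the full position value.

```python
# Hypothetical numbers, purely illustrative.
stake_amount = 100.0          # collateral to remove from the trade (before leverage)
leverage = 5.0
current_exit_rate = 20_000.0  # price of the traded pair

# Position value to reduce = collateral * leverage.
position_value = stake_amount * leverage      # 500.0 in quote currency

# Amount of the base asset corresponding to that value.
amount = position_value / current_exit_rate   # 0.025
print(amount)
```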
@ -1340,11 +1343,12 @@ class FreqtradeBot(LoggingMixin):
replacing: Optional[bool] = False
) -> bool:
"""
Buy cancel - cancel order
entry cancel - cancel order
:param replacing: Replacing order - prevent trade deletion.
:return: True if order was fully cancelled
:return: True if trade was fully cancelled
"""
was_trade_fully_canceled = False
side = trade.entry_side.capitalize()
# Cancelled orders may have the status of 'canceled' or 'closed'
if order['status'] not in constants.NON_OPEN_EXCHANGE_STATES:
@ -1371,7 +1375,6 @@ class FreqtradeBot(LoggingMixin):
corder = order
reason = constants.CANCEL_REASON['CANCELLED_ON_EXCHANGE']
side = trade.entry_side.capitalize()
logger.info('%s order %s for %s.', side, reason, trade)
# Using filled to determine the filled amount
@ -1385,24 +1388,13 @@ class FreqtradeBot(LoggingMixin):
was_trade_fully_canceled = True
reason += f", {constants.CANCEL_REASON['FULLY_CANCELLED']}"
else:
# FIXME TODO: This could possibly reworked to not duplicate the code 15 lines below.
self.update_trade_state(trade, trade.open_order_id, corder)
trade.open_order_id = None
logger.info(f'{side} Order timeout for {trade}.')
else:
# if trade is partially complete, edit the stake details for the trade
# and close the order
# update_trade_state (and subsequently recalc_trade_from_orders) will handle updates
# to the trade object
# cancel_order may not contain the full order dict, so we need to fallback
# to the order dict acquired before cancelling.
# we need to fall back to the values from order if corder does not contain these keys.
trade.amount = filled_amount
# * Check edge cases, we don't want to make leverage > 1.0 if we don't have to
# * (for leverage modes which aren't isolated futures)
trade.stake_amount = trade.amount * trade.open_rate / trade.leverage
self.update_trade_state(trade, trade.open_order_id, corder)
trade.open_order_id = None
logger.info(f'Partial {trade.entry_side} order timeout for {trade}.')
reason += f", {constants.CANCEL_REASON['PARTIALLY_FILLED']}"
@ -1439,8 +1431,6 @@ class FreqtradeBot(LoggingMixin):
trade.close_rate_requested = None
trade.close_profit = None
trade.close_profit_abs = None
trade.close_date = None
trade.is_open = True
trade.open_order_id = None
trade.exit_reason = None
cancelled = True
@ -1700,11 +1690,6 @@ class FreqtradeBot(LoggingMixin):
'stake_amount': trade.stake_amount,
}
if 'fiat_display_currency' in self.config:
msg.update({
'fiat_currency': self.config['fiat_display_currency'],
})
# Send the message
self.rpc.send_msg(msg)


@ -110,7 +110,7 @@ class Backtesting:
self.timeframe = str(self.config.get('timeframe'))
self.timeframe_min = timeframe_to_minutes(self.timeframe)
self.init_backtest_detail()
self.pairlists = PairListManager(self.exchange, self.config)
self.pairlists = PairListManager(self.exchange, self.config, self.dataprovider)
if 'VolumePairList' in self.pairlists.name_list:
raise OperationalException("VolumePairList not allowed for backtesting. "
"Please use StaticPairList instead.")
@ -540,7 +540,7 @@ class Backtesting:
if stake_amount is not None and stake_amount < 0.0:
amount = amount_to_contract_precision(
abs(stake_amount) / current_rate, trade.amount_precision,
abs(stake_amount * trade.leverage) / current_rate, trade.amount_precision,
self.precision_mode, trade.contract_size)
if amount == 0.0:
return trade


@ -0,0 +1,90 @@
"""
External Pair List provider
Provides pair list from Leader data
"""
import logging
from typing import Any, Dict, List, Optional
from freqtrade.exceptions import OperationalException
from freqtrade.plugins.pairlist.IPairList import IPairList
logger = logging.getLogger(__name__)
class ProducerPairList(IPairList):
"""
PairList plugin for use with external_message_consumer.
Will use pairs given from leader data.
Usage:
"pairlists": [
{
"method": "ProducerPairList",
"number_assets": 5,
"producer_name": "default",
}
],
"""
def __init__(self, exchange, pairlistmanager,
config: Dict[str, Any], pairlistconfig: Dict[str, Any],
pairlist_pos: int) -> None:
super().__init__(exchange, pairlistmanager, config, pairlistconfig, pairlist_pos)
self._num_assets: int = self._pairlistconfig.get('number_assets', 0)
self._producer_name = self._pairlistconfig.get('producer_name', 'default')
if not config.get('external_message_consumer', {}).get('enabled'):
raise OperationalException(
"ProducerPairList requires external_message_consumer to be enabled.")
@property
def needstickers(self) -> bool:
"""
Boolean property defining if tickers are necessary.
If no Pairlist requires tickers, an empty Dict is passed
as tickers argument to filter_pairlist
"""
return False
def short_desc(self) -> str:
"""
Short whitelist method description - used for startup-messages
-> Please overwrite in subclasses
"""
return f"{self.name} - {self._producer_name}"
def _filter_pairlist(self, pairlist: Optional[List[str]]):
upstream_pairlist = self._pairlistmanager._dataprovider.get_producer_pairs(
self._producer_name)
if pairlist is None:
pairlist = self._pairlistmanager._dataprovider.get_producer_pairs(self._producer_name)
pairs = list(dict.fromkeys(pairlist + upstream_pairlist))
if self._num_assets:
pairs = pairs[:self._num_assets]
return pairs
def gen_pairlist(self, tickers: Dict) -> List[str]:
"""
Generate the pairlist
:param tickers: Tickers (from exchange.get_tickers()). May be cached.
:return: List of pairs
"""
pairs = self._filter_pairlist(None)
self.log_once(f"Received pairs: {pairs}", logger.debug)
pairs = self._whitelist_for_active_markets(self.verify_whitelist(pairs, logger.info))
return pairs
def filter_pairlist(self, pairlist: List[str], tickers: Dict) -> List[str]:
"""
Filters and sorts pairlist and returns the whitelist again.
Called on each bot iteration - please use internal caching if necessary
:param pairlist: pairlist to filter or sort
:param tickers: Tickers (from exchange.get_tickers()). May be cached.
:return: new whitelist
"""
return self._filter_pairlist(pairlist)
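A side note on `_filter_pairlist` above: `list(dict.fromkeys(...))` is an order-preserving de-duplication, so locally configured pairs keep their position ahead of the producer's pairs before the optional `number_assets` truncation. A tiny standalone sketch with made-up pair names:

```python
# Pairs already produced by earlier pairlist handlers on the consumer.
pairlist = ["BTC/USDT", "ETH/USDT"]
# Pairs received from the producer.
upstream_pairlist = ["ETH/USDT", "XRP/USDT", "BTC/USDT"]

# Order-preserving de-duplication: the first occurrence of each pair wins.
pairs = list(dict.fromkeys(pairlist + upstream_pairlist))
print(pairs)      # ['BTC/USDT', 'ETH/USDT', 'XRP/USDT']
print(pairs[:2])  # number_assets-style truncation -> ['BTC/USDT', 'ETH/USDT']
```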


@ -232,6 +232,4 @@ class VolumePairList(IPairList):
# Limit pairlist to the requested number of pairs
pairs = pairs[:self._number_pairs]
self.log_once(f"Searching {self._number_pairs} pairs: {pairs}", logger.info)
return pairs


@ -3,11 +3,12 @@ PairList manager class
""" """
import logging import logging
from functools import partial from functools import partial
from typing import Dict, List from typing import Dict, List, Optional
from cachetools import TTLCache, cached from cachetools import TTLCache, cached
from freqtrade.constants import Config, ListPairsWithTimeframes from freqtrade.constants import Config, ListPairsWithTimeframes
from freqtrade.data.dataprovider import DataProvider
from freqtrade.enums import CandleType from freqtrade.enums import CandleType
from freqtrade.exceptions import OperationalException from freqtrade.exceptions import OperationalException
from freqtrade.mixins import LoggingMixin from freqtrade.mixins import LoggingMixin
@ -21,13 +22,14 @@ logger = logging.getLogger(__name__)
class PairListManager(LoggingMixin): class PairListManager(LoggingMixin):
def __init__(self, exchange, config: Config) -> None: def __init__(self, exchange, config: Config, dataprovider: DataProvider = None) -> None:
self._exchange = exchange self._exchange = exchange
self._config = config self._config = config
self._whitelist = self._config['exchange'].get('pair_whitelist') self._whitelist = self._config['exchange'].get('pair_whitelist')
self._blacklist = self._config['exchange'].get('pair_blacklist', []) self._blacklist = self._config['exchange'].get('pair_blacklist', [])
self._pairlist_handlers: List[IPairList] = [] self._pairlist_handlers: List[IPairList] = []
self._tickers_needed = False self._tickers_needed = False
self._dataprovider: Optional[DataProvider] = dataprovider
for pairlist_handler_config in self._config.get('pairlists', []): for pairlist_handler_config in self._config.get('pairlists', []):
pairlist_handler = PairListResolver.load_pairlist( pairlist_handler = PairListResolver.load_pairlist(
pairlist_handler_config['method'], pairlist_handler_config['method'],
@ -96,6 +98,8 @@ class PairListManager(LoggingMixin):
# to ensure blacklist is respected. # to ensure blacklist is respected.
pairlist = self.verify_blacklist(pairlist, logger.warning) pairlist = self.verify_blacklist(pairlist, logger.warning)
self.log_once(f"Whitelist with {len(pairlist)} pairs: {pairlist}", logger.info)
self._whitelist = pairlist self._whitelist = pairlist
def verify_blacklist(self, pairlist: List[str], logmethod) -> List[str]: def verify_blacklist(self, pairlist: List[str], logmethod) -> List[str]:


@ -198,8 +198,10 @@ class ApiServer(RPCHandler):
logger.debug(f"Found message of type: {message.get('type')}") logger.debug(f"Found message of type: {message.get('type')}")
# Broadcast it # Broadcast it
await self._ws_channel_manager.broadcast(message) await self._ws_channel_manager.broadcast(message)
# Sleep, make this configurable? # Limit messages per sec.
await asyncio.sleep(0.1) # Could cause problems with queue size if too low, and
# problems with network traffik if too high.
await asyncio.sleep(0.001)
except asyncio.CancelledError: except asyncio.CancelledError:
pass pass


@ -30,9 +30,9 @@ class Discord(Webhook):
pass
def send_msg(self, msg) -> None:
logger.info(f"Sending discord message: {msg}")
if msg['type'].value in self.config['discord']:
logger.info(f"Sending discord message: {msg}")
msg['strategy'] = self.strategy
msg['timeframe'] = self.timeframe


@ -61,6 +61,14 @@ class Webhook(RPCHandler):
RPCMessageType.STARTUP,
RPCMessageType.WARNING):
valuedict = whconfig.get('webhookstatus')
elif msg['type'] in (
RPCMessageType.PROTECTION_TRIGGER,
RPCMessageType.PROTECTION_TRIGGER_GLOBAL,
RPCMessageType.WHITELIST,
RPCMessageType.ANALYZED_DF,
RPCMessageType.STRATEGY_MSG):
# Don't fail for non-implemented types
return
else:
raise NotImplementedError('Unknown message type: {}'.format(msg['type']))
if not valuedict:


@ -72,7 +72,7 @@ setup(
'pandas',
'tables',
'blosc',
'joblib',
'joblib>=1.2.0',
'pyarrow; platform_machine != "armv7l"',
'fastapi',
'uvicorn',


@ -200,6 +200,8 @@ def patch_freqtradebot(mocker, config) -> None:
mocker.patch('freqtrade.freqtradebot.RPCManager._init', MagicMock())
mocker.patch('freqtrade.freqtradebot.RPCManager.send_msg', MagicMock())
patch_whitelist(mocker, config)
mocker.patch('freqtrade.freqtradebot.ExternalMessageConsumer')
mocker.patch('freqtrade.configuration.config_validation._validate_consumers')
def get_patched_freqtradebot(mocker, config) -> FreqtradeBot:


@ -235,7 +235,7 @@ def test_calculate_market_change(testdatadir):
data = load_data(datadir=testdatadir, pairs=pairs, timeframe='5m')
result = calculate_market_change(data)
assert isinstance(result, float)
assert pytest.approx(result) == 0.00955514
assert pytest.approx(result) == 0.01100002
def test_combine_dataframes_with_mean(testdatadir):


@ -139,10 +139,10 @@ def test_jsondatahandler_ohlcv_purge(mocker, testdatadir):
def test_jsondatahandler_ohlcv_load(testdatadir, caplog):
dh = JsonDataHandler(testdatadir)
df = dh.ohlcv_load('XRP/ETH', '5m', 'spot')
assert len(df) == 711
assert len(df) == 712
df_mark = dh.ohlcv_load('UNITTEST/USDT', '1h', candle_type="mark")
assert len(df_mark) == 99
assert len(df_mark) == 100
df_no_mark = dh.ohlcv_load('UNITTEST/USDT', '1h', 'spot')
assert len(df_no_mark) == 0


@ -124,8 +124,8 @@ def test_backtest_analysis_nomock(default_conf, mocker, caplog, testdatadir, tmp
assert '0' in captured.out
assert '0.01616' in captured.out
assert '34.049' in captured.out
assert '0.104104' in captured.out
assert '0.104411' in captured.out
assert '47.0996' in captured.out
assert '52.8292' in captured.out
# test group 1
args = get_args(base_args + ['--analysis-groups', "1"])


@ -377,8 +377,8 @@ def test_load_partial_missing(testdatadir, caplog) -> None:
td = ((end - start).total_seconds() // 60 // 5) + 1
assert td != len(data['UNITTEST/BTC'])
# Shift endtime with +5 - as last candle is dropped (partial candle)
end_real = arrow.get(data['UNITTEST/BTC'].iloc[-1, 0]).shift(minutes=5)
# Shift endtime with +5
end_real = arrow.get(data['UNITTEST/BTC'].iloc[-1, 0])
assert log_has(f'UNITTEST/BTC, spot, 5m, '
f'data ends at {end_real.strftime(DATETIME_PRINT_FORMAT)}',
caplog)
@ -447,7 +447,7 @@ def test_get_timerange(default_conf, mocker, testdatadir) -> None:
)
min_date, max_date = get_timerange(data)
assert min_date.isoformat() == '2017-11-04T23:02:00+00:00'
assert max_date.isoformat() == '2017-11-14T22:58:00+00:00'
assert max_date.isoformat() == '2017-11-14T22:59:00+00:00'
def test_validate_backtest_data_warn(default_conf, mocker, caplog, testdatadir) -> None:
@ -470,7 +470,7 @@ def test_validate_backtest_data_warn(default_conf, mocker, caplog, testdatadir)
min_date, max_date, timeframe_to_minutes('1m'))
assert len(caplog.record_tuples) == 1
assert log_has(
"UNITTEST/BTC has missing frames: expected 14396, got 13680, that's 716 missing values",
"UNITTEST/BTC has missing frames: expected 14397, got 13681, that's 716 missing values",
caplog)


@ -501,6 +501,24 @@ def test_fill_leverage_tiers_binance_dryrun(default_conf, mocker, leverage_tiers
assert len(v) == len(value)
def test_additional_exchange_init_binance(default_conf, mocker):
api_mock = MagicMock()
api_mock.fapiPrivateGetPositionsideDual = MagicMock(return_value={"dualSidePosition": True})
api_mock.fapiPrivateGetMultiAssetsMargin = MagicMock(return_value={"multiAssetsMargin": True})
default_conf['dry_run'] = False
default_conf['trading_mode'] = TradingMode.FUTURES
default_conf['margin_mode'] = MarginMode.ISOLATED
with pytest.raises(OperationalException,
match=r"Hedge Mode is not supported.*\nMulti-Asset Mode is not supported.*"):
get_patched_exchange(mocker, default_conf, id="binance", api_mock=api_mock)
api_mock.fapiPrivateGetPositionsideDual = MagicMock(return_value={"dualSidePosition": False})
api_mock.fapiPrivateGetMultiAssetsMargin = MagicMock(return_value={"multiAssetsMargin": False})
exchange = get_patched_exchange(mocker, default_conf, id="binance", api_mock=api_mock)
assert exchange
ccxt_exceptionhandlers(mocker, default_conf, api_mock, 'binance',
"additional_exchange_init", "fapiPrivateGetPositionsideDual")
def test__set_leverage_binance(mocker, default_conf):
api_mock = MagicMock()


@ -137,6 +137,7 @@ def exchange_futures(request, exchange_conf, class_mocker):
'freqtrade.exchange.binance.Binance.fill_leverage_tiers')
class_mocker.patch('freqtrade.exchange.exchange.Exchange.fetch_trading_fees')
class_mocker.patch('freqtrade.exchange.okx.Okx.additional_exchange_init')
class_mocker.patch('freqtrade.exchange.binance.Binance.additional_exchange_init')
class_mocker.patch('freqtrade.exchange.exchange.Exchange.load_cached_leverage_tiers',
return_value=None)
class_mocker.patch('freqtrade.exchange.exchange.Exchange.cache_leverage_tiers')


@ -86,7 +86,7 @@ def test_use_SVM_to_remove_outliers_and_outlier_protection(mocker, freqai_conf,
freqai_conf['freqai']['feature_parameters'].update({"outlier_protection_percentage": 0.1})
freqai.dk.use_SVM_to_remove_outliers(predict=False)
assert log_has_re(
"SVM detected 8.09%",
"SVM detected 8.66%",
caplog,
)


@@ -80,7 +80,7 @@ def load_data_test(what, testdatadir):
         data.loc[:, 'close'] = np.sin(data.index * hz) / 1000 + base
     return {'UNITTEST/BTC': clean_ohlcv_dataframe(data, timeframe='1m', pair='UNITTEST/BTC',
-                                                  fill_missing=True)}
+                                                  fill_missing=True, drop_incomplete=True)}
 # FIX: fixturize this?
@@ -323,7 +323,7 @@ def test_data_to_dataframe_bt(default_conf, mocker, testdatadir) -> None:
     backtesting = Backtesting(default_conf)
     backtesting._set_strategy(backtesting.strategylist[0])
     processed = backtesting.strategy.advise_all_indicators(data)
-    assert len(processed['UNITTEST/BTC']) == 102
+    assert len(processed['UNITTEST/BTC']) == 103
     # Load strategy to compare the result between Backtesting function and strategy are the same
     strategy = StrategyResolver.load_strategy(default_conf)
@@ -1165,9 +1165,9 @@ def test_backtest_start_timerange(default_conf, mocker, caplog, testdatadir):
         'Parameter --timerange detected: 1510694220-1510700340 ...',
         f'Using data directory: {testdatadir} ...',
         'Loading data from 2017-11-14 20:57:00 '
-        'up to 2017-11-14 22:58:00 (0 days).',
+        'up to 2017-11-14 22:59:00 (0 days).',
         'Backtesting with data from 2017-11-14 21:17:00 '
-        'up to 2017-11-14 22:58:00 (0 days).',
+        'up to 2017-11-14 22:59:00 (0 days).',
         'Parameter --enable-position-stacking detected ...'
     ]
@@ -1244,9 +1244,9 @@ def test_backtest_start_multi_strat(default_conf, mocker, caplog, testdatadir):
         'Parameter --timerange detected: 1510694220-1510700340 ...',
         f'Using data directory: {testdatadir} ...',
         'Loading data from 2017-11-14 20:57:00 '
-        'up to 2017-11-14 22:58:00 (0 days).',
+        'up to 2017-11-14 22:59:00 (0 days).',
         'Backtesting with data from 2017-11-14 21:17:00 '
-        'up to 2017-11-14 22:58:00 (0 days).',
+        'up to 2017-11-14 22:59:00 (0 days).',
         'Parameter --enable-position-stacking detected ...',
         f'Running backtesting for Strategy {CURRENT_TEST_STRATEGY}',
         'Running backtesting for Strategy StrategyTestV2',
@@ -1355,9 +1355,9 @@ def test_backtest_start_multi_strat_nomock(default_conf, mocker, caplog, testdatadir):
         'Parameter --timerange detected: 1510694220-1510700340 ...',
         f'Using data directory: {testdatadir} ...',
         'Loading data from 2017-11-14 20:57:00 '
-        'up to 2017-11-14 22:58:00 (0 days).',
+        'up to 2017-11-14 22:59:00 (0 days).',
         'Backtesting with data from 2017-11-14 21:17:00 '
-        'up to 2017-11-14 22:58:00 (0 days).',
+        'up to 2017-11-14 22:59:00 (0 days).',
         'Parameter --enable-position-stacking detected ...',
         f'Running backtesting for Strategy {CURRENT_TEST_STRATEGY}',
         'Running backtesting for Strategy StrategyTestV2',
@@ -1371,7 +1371,7 @@ def test_backtest_start_multi_strat_nomock(default_conf, mocker, caplog, testdatadir):
     assert 'EXIT REASON STATS' in captured.out
     assert 'DAY BREAKDOWN' in captured.out
     assert 'LEFT OPEN TRADES REPORT' in captured.out
-    assert '2017-11-14 21:17:00 -> 2017-11-14 22:58:00 | Max open trades : 1' in captured.out
+    assert '2017-11-14 21:17:00 -> 2017-11-14 22:59:00 | Max open trades : 1' in captured.out
     assert 'STRATEGY SUMMARY' in captured.out
@@ -1503,9 +1503,9 @@ def test_backtest_start_nomock_futures(default_conf_usdt, mocker,
         'Parameter -i/--timeframe detected ... Using timeframe: 1h ...',
         f'Using data directory: {testdatadir} ...',
         'Loading data from 2021-11-17 01:00:00 '
-        'up to 2021-11-21 03:00:00 (4 days).',
+        'up to 2021-11-21 04:00:00 (4 days).',
         'Backtesting with data from 2021-11-17 21:00:00 '
-        'up to 2021-11-21 03:00:00 (3 days).',
+        'up to 2021-11-21 04:00:00 (3 days).',
         'XRP/USDT, funding_rate, 8h, data starts at 2021-11-18 00:00:00',
         'XRP/USDT, mark, 8h, data starts at 2021-11-18 00:00:00',
         f'Running backtesting for Strategy {CURRENT_TEST_STRATEGY}',
@@ -1616,9 +1616,9 @@ def test_backtest_start_multi_strat_nomock_detail(default_conf, mocker,
         'Parameter --timeframe-detail detected, using 1m for intra-candle backtesting ...',
         f'Using data directory: {testdatadir} ...',
         'Loading data from 2019-10-11 00:00:00 '
-        'up to 2019-10-13 11:10:00 (2 days).',
+        'up to 2019-10-13 11:15:00 (2 days).',
         'Backtesting with data from 2019-10-11 01:40:00 '
-        'up to 2019-10-13 11:10:00 (2 days).',
+        'up to 2019-10-13 11:15:00 (2 days).',
         f'Running backtesting for Strategy {CURRENT_TEST_STRATEGY}',
     ]
@@ -1719,7 +1719,7 @@ def test_backtest_start_multi_strat_caching(default_conf, mocker, caplog, testdatadir):
         'Parameter --timerange detected: 1510694220-1510700340 ...',
         f'Using data directory: {testdatadir} ...',
         'Loading data from 2017-11-14 20:57:00 '
-        'up to 2017-11-14 22:58:00 (0 days).',
+        'up to 2017-11-14 22:59:00 (0 days).',
         'Parameter --enable-position-stacking detected ...',
     ]
@@ -1732,7 +1732,7 @@ def test_backtest_start_multi_strat_caching(default_conf, mocker, caplog, testdatadir):
             'Running backtesting for Strategy StrategyTestV2',
             'Running backtesting for Strategy StrategyTestV3',
             'Ignoring max_open_trades (--disable-max-market-positions was used) ...',
-            'Backtesting with data from 2017-11-14 21:17:00 up to 2017-11-14 22:58:00 (0 days).',
+            'Backtesting with data from 2017-11-14 21:17:00 up to 2017-11-14 22:59:00 (0 days).',
         ]
     elif run_id == '2' and min_backtest_date < start_time:
         assert backtestmock.call_count == 0
@@ -1745,7 +1745,7 @@ def test_backtest_start_multi_strat_caching(default_conf, mocker, caplog, testdatadir):
             'Reusing result of previous backtest for StrategyTestV2',
            'Running backtesting for Strategy StrategyTestV3',
            'Ignoring max_open_trades (--disable-max-market-positions was used) ...',
-            'Backtesting with data from 2017-11-14 21:17:00 up to 2017-11-14 22:58:00 (0 days).',
+            'Backtesting with data from 2017-11-14 21:17:00 up to 2017-11-14 22:59:00 (0 days).',
         ]
         assert backtestmock.call_count == 1
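The count and timestamp changes above (102 to 103 candles, end times moving forward by one candle) all indicate that one additional trailing candle is now part of the processed test data. For readers unfamiliar with the flag that appears in the first hunk, a drop_incomplete-style option conventionally discards the final, still-forming candle; a minimal sketch of that idea (simplified, not freqtrade's actual clean_ohlcv_dataframe):

import pandas as pd


def clean_ohlcv(df: pd.DataFrame, drop_incomplete: bool) -> pd.DataFrame:
    # When drop_incomplete is set, the final row (the still-forming candle) is discarded.
    return df.iloc[:-1] if drop_incomplete else df


candles = pd.DataFrame({'close': range(103)})
assert len(clean_ohlcv(candles, drop_incomplete=True)) == 102
assert len(clean_ohlcv(candles, drop_incomplete=False)) == 103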


@@ -93,11 +93,16 @@ def test_backtest_position_adjustment(default_conf, fee, mocker, testdatadir) ->
                 t["close_rate"], 6) < round(ln.iloc[0]["high"], 6))
-def test_backtest_position_adjustment_detailed(default_conf, fee, mocker) -> None:
+@pytest.mark.parametrize('leverage', [
+    1, 2
+])
+def test_backtest_position_adjustment_detailed(default_conf, fee, mocker, leverage) -> None:
     default_conf['use_exit_signal'] = False
     mocker.patch('freqtrade.exchange.Exchange.get_fee', fee)
     mocker.patch("freqtrade.exchange.Exchange.get_min_pair_stake_amount", return_value=10)
     mocker.patch("freqtrade.exchange.Exchange.get_max_pair_stake_amount", return_value=float('inf'))
+    mocker.patch("freqtrade.exchange.Exchange.get_max_leverage", return_value=10)
     patch_exchange(mocker)
     default_conf.update({
         "stake_amount": 100.0,
@@ -105,6 +110,7 @@ def test_backtest_position_adjustment_detailed(default_conf, fee, mocker) -> None:
         "strategy": "StrategyTestV3"
     })
     backtesting = Backtesting(default_conf)
+    backtesting._can_short = True
     backtesting._set_strategy(backtesting.strategylist[0])
     pair = 'XRP/USDT'
     row = [
@@ -120,18 +126,19 @@ def test_backtest_position_adjustment_detailed(default_conf, fee, mocker) -> None:
         '',  # enter_tag
         '',  # exit_tag
     ]
+    backtesting.strategy.leverage = MagicMock(return_value=leverage)
     trade = backtesting._enter_trade(pair, row=row, direction='long')
     trade.orders[0].close_bt_order(row[0], trade)
     assert trade
     assert pytest.approx(trade.stake_amount) == 100.0
-    assert pytest.approx(trade.amount) == 47.61904762
+    assert pytest.approx(trade.amount) == 47.61904762 * leverage
     assert len(trade.orders) == 1
     backtesting.strategy.adjust_trade_position = MagicMock(return_value=None)
     trade = backtesting._get_adjust_trade_entry_for_candle(trade, row)
     assert trade
     assert pytest.approx(trade.stake_amount) == 100.0
-    assert pytest.approx(trade.amount) == 47.61904762
+    assert pytest.approx(trade.amount) == 47.61904762 * leverage
     assert len(trade.orders) == 1
     # Increase position by 100
     backtesting.strategy.adjust_trade_position = MagicMock(return_value=100)
@@ -140,7 +147,7 @@ def test_backtest_position_adjustment_detailed(default_conf, fee, mocker) -> None:
     assert trade
     assert pytest.approx(trade.stake_amount) == 200.0
-    assert pytest.approx(trade.amount) == 95.23809524
+    assert pytest.approx(trade.amount) == 95.23809524 * leverage
     assert len(trade.orders) == 2
     # Reduce by more than amount - no change to trade.
@@ -150,7 +157,7 @@ def test_backtest_position_adjustment_detailed(default_conf, fee, mocker) -> None:
     assert trade
     assert pytest.approx(trade.stake_amount) == 200.0
-    assert pytest.approx(trade.amount) == 95.23809524
+    assert pytest.approx(trade.amount) == 95.23809524 * leverage
     assert len(trade.orders) == 2
     assert trade.nr_of_successful_entries == 2
@@ -160,7 +167,7 @@ def test_backtest_position_adjustment_detailed(default_conf, fee, mocker) -> None:
     assert trade
     assert pytest.approx(trade.stake_amount) == 100.0
-    assert pytest.approx(trade.amount) == 47.61904762
+    assert pytest.approx(trade.amount) == 47.61904762 * leverage
     assert len(trade.orders) == 3
     assert trade.nr_of_successful_entries == 2
     assert trade.nr_of_successful_exits == 1
@@ -171,7 +178,7 @@ def test_backtest_position_adjustment_detailed(default_conf, fee, mocker) -> None:
     assert trade
     assert pytest.approx(trade.stake_amount) == 100.0
-    assert pytest.approx(trade.amount) == 47.61904762
+    assert pytest.approx(trade.amount) == 47.61904762 * leverage
     assert len(trade.orders) == 3
     assert trade.nr_of_successful_entries == 2
     assert trade.nr_of_successful_exits == 1
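The parametrization above runs the whole test body once per leverage value, which is why every expected trade.amount is now written as a base amount times leverage. A standalone sketch of the same pattern, consistent with the numbers used in the test (a stake of 100 at a price of 2.1); the helper below is illustrative, not freqtrade code:

import pytest


def position_amount(stake: float, price: float, leverage: float) -> float:
    # Amount of the asset bought for a given stake, price and leverage.
    return stake / price * leverage


@pytest.mark.parametrize('leverage', [1, 2])
def test_position_amount_scales_with_leverage(leverage):
    # 100 at a price of 2.1 buys ~47.62 units at 1x and twice that at 2x.
    assert position_amount(100.0, 2.1, leverage) == pytest.approx(47.61904762 * leverage)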


@@ -9,6 +9,7 @@ import pytest
 import time_machine
 from freqtrade.constants import AVAILABLE_PAIRLISTS
+from freqtrade.data.dataprovider import DataProvider
 from freqtrade.enums import CandleType, RunMode
 from freqtrade.exceptions import OperationalException
 from freqtrade.persistence import Trade
@@ -40,6 +41,12 @@ def whitelist_conf(default_conf):
             "sort_key": "quoteVolume",
         },
     ]
+    default_conf.update({
+        "external_message_consumer": {
+            "enabled": True,
+            "producers": [],
+        }
+    })
     return default_conf
@@ -126,7 +133,7 @@ def test_log_cached(mocker, static_pl_conf, markets, tickers):
 def test_load_pairlist_noexist(mocker, markets, default_conf):
     freqtrade = get_patched_freqtradebot(mocker, default_conf)
     mocker.patch('freqtrade.exchange.Exchange.markets', PropertyMock(return_value=markets))
-    plm = PairListManager(freqtrade.exchange, default_conf)
+    plm = PairListManager(freqtrade.exchange, default_conf, MagicMock())
     with pytest.raises(OperationalException,
                        match=r"Impossible to load Pairlist 'NonexistingPairList'. "
                              r"This class does not exist or contains Python code errors."):
@@ -137,7 +144,7 @@ def test_load_pairlist_noexist(mocker, markets, default_conf):
 def test_load_pairlist_verify_multi(mocker, markets_static, default_conf):
     freqtrade = get_patched_freqtradebot(mocker, default_conf)
     mocker.patch('freqtrade.exchange.Exchange.markets', PropertyMock(return_value=markets_static))
-    plm = PairListManager(freqtrade.exchange, default_conf)
+    plm = PairListManager(freqtrade.exchange, default_conf, MagicMock())
     # Call different versions one after the other, should always consider what was passed in
     # and have no side-effects (therefore the same check multiple times)
     assert plm.verify_whitelist(['ETH/BTC', 'XRP/BTC', ], print) == ['ETH/BTC', 'XRP/BTC']
@@ -269,7 +276,7 @@ def test_refresh_pairlist_dynamic(mocker, shitcoinmarkets, tickers, whitelist_conf):
     with pytest.raises(OperationalException,
                        match=r'`number_assets` not specified. Please check your configuration '
                              r'for "pairlist.config.number_assets"'):
-        PairListManager(freqtrade.exchange, whitelist_conf)
+        PairListManager(freqtrade.exchange, whitelist_conf, MagicMock())
 def test_refresh_pairlist_dynamic_2(mocker, shitcoinmarkets, tickers, whitelist_conf_2):
@@ -694,7 +701,7 @@ def test_PrecisionFilter_error(mocker, whitelist_conf) -> None:
     with pytest.raises(OperationalException,
                        match=r"PrecisionFilter can only work with stoploss defined\..*"):
-        PairListManager(MagicMock, whitelist_conf)
+        PairListManager(MagicMock, whitelist_conf, MagicMock())
 def test_PerformanceFilter_error(mocker, whitelist_conf, caplog) -> None:
@@ -703,7 +710,7 @@ def test_PerformanceFilter_error(mocker, whitelist_conf, caplog) -> None:
         del Trade.query
     mocker.patch('freqtrade.exchange.Exchange.exchange_has', MagicMock(return_value=True))
     exchange = get_patched_exchange(mocker, whitelist_conf)
-    pm = PairListManager(exchange, whitelist_conf)
+    pm = PairListManager(exchange, whitelist_conf, MagicMock())
     pm.refresh_pairlist()
     assert log_has("PerformanceFilter is not available in this mode.", caplog)
@@ -1167,6 +1174,10 @@ def test_spreadfilter_invalid_data(mocker, default_conf, markets, tickers, caplog):
      "[{'OffsetFilter': 'OffsetFilter - Taking 10 Pairs, starting from 5.'}]",
      None
     ),
+    ({"method": "ProducerPairList"},
+     "[{'ProducerPairList': 'ProducerPairList - default'}]",
+     None
+    ),
 ])
 def test_pricefilter_desc(mocker, whitelist_conf, markets, pairlistconfig,
                           desc_expected, exception_expected):
@@ -1341,3 +1352,77 @@ def test_expand_pairlist_keep_invalid(wildcardlist, pairs, expected):
             expand_pairlist(wildcardlist, pairs, keep_invalid=True)
     else:
         assert sorted(expand_pairlist(wildcardlist, pairs, keep_invalid=True)) == sorted(expected)
+
+
+def test_ProducerPairlist_no_emc(mocker, whitelist_conf):
+    mocker.patch('freqtrade.exchange.Exchange.exchange_has', MagicMock(return_value=True))
+    whitelist_conf['pairlists'] = [
+        {
+            "method": "ProducerPairList",
+            "number_assets": 10,
+            "producer_name": "hello_world",
+        }
+    ]
+    del whitelist_conf['external_message_consumer']
+
+    with pytest.raises(OperationalException,
+                       match=r"ProducerPairList requires external_message_consumer to be enabled."):
+        get_patched_freqtradebot(mocker, whitelist_conf)
+
+
+def test_ProducerPairlist(mocker, whitelist_conf, markets):
+    mocker.patch('freqtrade.exchange.Exchange.exchange_has', MagicMock(return_value=True))
+    mocker.patch.multiple('freqtrade.exchange.Exchange',
+                          markets=PropertyMock(return_value=markets),
+                          exchange_has=MagicMock(return_value=True),
+                          )
+    whitelist_conf['pairlists'] = [
+        {
+            "method": "ProducerPairList",
+            "number_assets": 2,
+            "producer_name": "hello_world",
+        }
+    ]
+    whitelist_conf.update({
+        "external_message_consumer": {
+            "enabled": True,
+            "producers": [
+                {
+                    "name": "hello_world",
+                    "host": "null",
+                    "port": 9891,
+                    "ws_token": "dummy",
+                }
+            ]
+        }
+    })
+
+    exchange = get_patched_exchange(mocker, whitelist_conf)
+    dp = DataProvider(whitelist_conf, exchange, None)
+    pairs = ['ETH/BTC', 'LTC/BTC', 'XRP/BTC']
+    # different producer
+    dp._set_producer_pairs(pairs + ['MEEP/USDT'], 'default')
+    pm = PairListManager(exchange, whitelist_conf, dp)
+    pm.refresh_pairlist()
+    assert pm.whitelist == []
+    # proper producer
+    dp._set_producer_pairs(pairs, 'hello_world')
+    pm.refresh_pairlist()
+    # Pairlist reduced to 2
+    assert pm.whitelist == pairs[:2]
+    assert len(pm.whitelist) == 2
+    whitelist_conf['exchange']['pair_whitelist'] = ['TKN/BTC']
+
+    whitelist_conf['pairlists'] = [
+        {"method": "StaticPairList"},
+        {
+            "method": "ProducerPairList",
+            "producer_name": "hello_world",
+        }
+    ]
+    pm = PairListManager(exchange, whitelist_conf, dp)
+    pm.refresh_pairlist()
+    assert len(pm.whitelist) == 4
+    assert pm.whitelist == ['TKN/BTC'] + pairs
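Taken together, the two new tests outline how ProducerPairList is wired up: it requires an enabled external_message_consumer section with a producer whose name matches producer_name, it can cap the number of pairs it takes over via number_assets, and it can be combined with other pairlists such as StaticPairList. A configuration fragment mirroring the structure of the test fixtures (illustrative values only, not a recommended setup):

# Illustrative configuration fragment, in the same shape as the fixtures used above.
config_fragment = {
    "pairlists": [
        {"method": "StaticPairList"},
        {
            "method": "ProducerPairList",
            "number_assets": 2,            # optional cap on the number of pairs taken over
            "producer_name": "hello_world",
        },
    ],
    "external_message_consumer": {
        "enabled": True,
        "producers": [
            {
                "name": "hello_world",     # must match producer_name above
                "host": "127.0.0.1",       # placeholder address
                "port": 9891,
                "ws_token": "dummy",
            }
        ],
    },
}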


@@ -207,10 +207,13 @@ async def test_emc_create_connection_invalid_port(default_conf, caplog, mocker):
     })
     dp = DataProvider(default_conf, None, None, None)
+    # Handle start explicitly to avoid messing with threading in tests
+    mocker.patch("freqtrade.rpc.external_message_consumer.ExternalMessageConsumer.start",)
     emc = ExternalMessageConsumer(default_conf, dp)
     try:
-        await asyncio.sleep(0.01)
+        emc._running = True
+        await emc._create_connection(emc.producers[0], asyncio.Lock())
         assert log_has_re(r".+ is an invalid WebSocket URL .+", caplog)
     finally:
         emc.shutdown()


@@ -365,6 +365,14 @@ def test_exception_send_msg(default_conf, mocker, caplog):
     with pytest.raises(NotImplementedError):
         webhook.send_msg(msg)
+    # Test no failure for not implemented but known messagetypes
+    for e in RPCMessageType:
+        msg = {
+            'type': e,
+            'status': 'whatever'
+        }
+        webhook.send_msg(msg)
 def test__send_msg(default_conf, mocker, caplog):
     default_conf["webhook"] = get_webhook_dict()


@@ -288,7 +288,7 @@ def test_advise_all_indicators(default_conf, testdatadir) -> None:
     data = load_data(testdatadir, '1m', ['UNITTEST/BTC'], timerange=timerange,
                      fill_up_missing=True)
     processed = strategy.advise_all_indicators(data)
-    assert len(processed['UNITTEST/BTC']) == 102  # partial candle was removed
+    assert len(processed['UNITTEST/BTC']) == 103
 def test_populate_any_indicators(default_conf, testdatadir) -> None:
@@ -300,7 +300,7 @@ def test_populate_any_indicators(default_conf, testdatadir) -> None:
     processed = strategy.populate_any_indicators('UNITTEST/BTC', data, '5m')
     assert processed == data
     assert id(processed) == id(data)
-    assert len(processed['UNITTEST/BTC']) == 102  # partial candle was removed
+    assert len(processed['UNITTEST/BTC']) == 103
 def test_freqai_not_initialized(default_conf) -> None:


@@ -28,6 +28,7 @@ from tests.conftest import (create_mock_trades, create_mock_trades_usdt, get_pat
 from tests.conftest_trades import (MOCK_TRADE_COUNT, entry_side, exit_side, mock_order_1,
                                    mock_order_2, mock_order_2_sell, mock_order_3, mock_order_3_sell,
                                    mock_order_4, mock_order_5_stoploss, mock_order_6_sell)
+from tests.conftest_trades_usdt import mock_trade_usdt_4
 def patch_RPCManager(mocker) -> MagicMock:
@@ -1060,6 +1061,7 @@ def test_add_stoploss_on_exchange(mocker, default_conf_usdt, limit_order, is_short):
     freqtrade = FreqtradeBot(default_conf_usdt)
     freqtrade.strategy.order_types['stoploss_on_exchange'] = True
+    # TODO: should not be magicmock
     trade = MagicMock()
     trade.is_short = is_short
     trade.open_order_id = None
@@ -1101,6 +1103,7 @@ def test_handle_stoploss_on_exchange(mocker, default_conf_usdt, fee, caplog, is_short):
     # First case: when stoploss is not yet set but the order is open
     # should get the stoploss order id immediately
     # and should return false as no trade actually happened
+    # TODO: should not be magicmock
     trade = MagicMock()
     trade.is_short = is_short
     trade.is_open = True
@@ -1879,6 +1882,7 @@ def test_exit_positions(mocker, default_conf_usdt, limit_order, is_short, caplog):
                  return_value=limit_order[entry_side(is_short)])
     mocker.patch('freqtrade.exchange.Exchange.get_trades_for_order', return_value=[])
+    # TODO: should not be magicmock
     trade = MagicMock()
     trade.is_short = is_short
     trade.open_order_id = '123'
@@ -1902,6 +1906,7 @@ def test_exit_positions_exception(mocker, default_conf_usdt, limit_order, caplog):
     order = limit_order[entry_side(is_short)]
     mocker.patch('freqtrade.exchange.Exchange.fetch_order', return_value=order)
+    # TODO: should not be magicmock
     trade = MagicMock()
     trade.is_short = is_short
     trade.open_order_id = None
@@ -2042,6 +2047,7 @@ def test_update_trade_state_exception(mocker, default_conf_usdt, is_short, limit_order):
     freqtrade = get_patched_freqtradebot(mocker, default_conf_usdt)
     mocker.patch('freqtrade.exchange.Exchange.fetch_order', return_value=order)
+    # TODO: should not be magicmock
     trade = MagicMock()
     trade.open_order_id = '123'
     trade.amount = 123
@@ -2060,6 +2066,7 @@ def test_update_trade_state_orderexception(mocker, default_conf_usdt, caplog) -> None:
     mocker.patch('freqtrade.exchange.Exchange.fetch_order',
                  MagicMock(side_effect=InvalidOrderException))
+    # TODO: should not be magicmock
     trade = MagicMock()
     trade.open_order_id = '123'
@@ -2661,6 +2668,7 @@ def test_manage_open_orders_exit_usercustom(
     rpc_mock = patch_RPCManager(mocker)
     cancel_order_mock = MagicMock()
     patch_exchange(mocker)
+    mocker.patch('freqtrade.exchange.Exchange.get_min_pair_stake_amount', return_value=0.0)
     et_mock = mocker.patch('freqtrade.freqtradebot.FreqtradeBot.execute_trade_exit')
     mocker.patch.multiple(
         'freqtrade.exchange.Exchange',
@@ -2673,7 +2681,6 @@ def test_manage_open_orders_exit_usercustom(
     open_trade_usdt.open_date = arrow.utcnow().shift(hours=-5).datetime
     open_trade_usdt.close_date = arrow.utcnow().shift(minutes=-601).datetime
     open_trade_usdt.close_profit_abs = 0.001
-    open_trade_usdt.is_open = False
     Trade.query.session.add(open_trade_usdt)
     Trade.commit()
@@ -2687,7 +2694,6 @@ def test_manage_open_orders_exit_usercustom(
     freqtrade.manage_open_orders()
     assert cancel_order_mock.call_count == 0
     assert rpc_mock.call_count == 1
-    assert open_trade_usdt.is_open is False
     assert freqtrade.strategy.check_exit_timeout.call_count == 1
     assert freqtrade.strategy.check_entry_timeout.call_count == 0
@@ -2697,7 +2703,6 @@ def test_manage_open_orders_exit_usercustom(
     freqtrade.manage_open_orders()
     assert cancel_order_mock.call_count == 0
     assert rpc_mock.call_count == 1
-    assert open_trade_usdt.is_open is False
     assert freqtrade.strategy.check_exit_timeout.call_count == 1
     assert freqtrade.strategy.check_entry_timeout.call_count == 0
@@ -2707,7 +2712,6 @@ def test_manage_open_orders_exit_usercustom(
     freqtrade.manage_open_orders()
     assert cancel_order_mock.call_count == 1
     assert rpc_mock.call_count == 2
-    assert open_trade_usdt.is_open is True
     assert freqtrade.strategy.check_exit_timeout.call_count == 1
     assert freqtrade.strategy.check_entry_timeout.call_count == 0
@@ -2748,14 +2752,14 @@ def test_manage_open_orders_exit(
         'freqtrade.exchange.Exchange',
         fetch_ticker=ticker_usdt,
         fetch_order=MagicMock(return_value=limit_sell_order_old),
-        cancel_order=cancel_order_mock
+        cancel_order=cancel_order_mock,
+        get_min_pair_stake_amount=MagicMock(return_value=0),
     )
     freqtrade = FreqtradeBot(default_conf_usdt)
     open_trade_usdt.open_date = arrow.utcnow().shift(hours=-5).datetime
     open_trade_usdt.close_date = arrow.utcnow().shift(minutes=-601).datetime
     open_trade_usdt.close_profit_abs = 0.001
-    open_trade_usdt.is_open = False
     open_trade_usdt.is_short = is_short
     Trade.query.session.add(open_trade_usdt)
@@ -2796,7 +2800,6 @@ def test_check_handle_cancelled_exit(
     open_trade_usdt.open_date = arrow.utcnow().shift(hours=-5).datetime
     open_trade_usdt.close_date = arrow.utcnow().shift(minutes=-601).datetime
-    open_trade_usdt.is_open = False
     open_trade_usdt.is_short = is_short
     Trade.query.session.add(open_trade_usdt)
@@ -2984,7 +2987,7 @@ def test_manage_open_orders_exception(default_conf_usdt, ticker_usdt, open_trade
 @pytest.mark.parametrize("is_short", [False, True])
-def test_handle_cancel_enter(mocker, caplog, default_conf_usdt, limit_order, is_short) -> None:
+def test_handle_cancel_enter(mocker, caplog, default_conf_usdt, limit_order, is_short, fee) -> None:
     patch_RPCManager(mocker)
     patch_exchange(mocker)
     l_order = limit_order[entry_side(is_short)]
@@ -2998,15 +3001,12 @@ def test_handle_cancel_enter(mocker, caplog, default_conf_usdt, limit_order, is_short) -> None:
     freqtrade = FreqtradeBot(default_conf_usdt)
     freqtrade._notify_enter_cancel = MagicMock()
-    # TODO: Convert to real trade
-    trade = MagicMock()
-    trade.pair = 'LTC/USDT'
-    trade.open_rate = 200
-    trade.is_short = False
-    trade.entry_side = "buy"
+    trade = mock_trade_usdt_4(fee, is_short)
+    Trade.query.session.add(trade)
+    Trade.commit()
     l_order['filled'] = 0.0
     l_order['status'] = 'open'
-    trade.nr_of_successful_entries = 0
     reason = CANCEL_REASON['TIMEOUT']
     assert freqtrade.handle_cancel_enter(trade, l_order, reason)
     assert cancel_order_mock.call_count == 1
@@ -3038,7 +3038,7 @@ def test_handle_cancel_enter(mocker, caplog, default_conf_usdt, limit_order, is_short, fee) -> None:
 @pytest.mark.parametrize("is_short", [False, True])
 @pytest.mark.parametrize("limit_buy_order_canceled_empty", ['binance', 'ftx', 'kraken', 'bittrex'],
                          indirect=['limit_buy_order_canceled_empty'])
-def test_handle_cancel_enter_exchanges(mocker, caplog, default_conf_usdt, is_short,
+def test_handle_cancel_enter_exchanges(mocker, caplog, default_conf_usdt, is_short, fee,
                                        limit_buy_order_canceled_empty) -> None:
     patch_RPCManager(mocker)
     patch_exchange(mocker)
@@ -3049,11 +3049,10 @@ def test_handle_cancel_enter_exchanges(mocker, caplog, default_conf_usdt, is_short,
     freqtrade = FreqtradeBot(default_conf_usdt)
     reason = CANCEL_REASON['TIMEOUT']
-    # TODO: Convert to real trade
-    trade = MagicMock()
-    trade.nr_of_successful_entries = 0
-    trade.pair = 'LTC/ETH'
-    trade.entry_side = "sell" if is_short else "buy"
+    trade = mock_trade_usdt_4(fee, is_short)
+    Trade.query.session.add(trade)
+    Trade.commit()
     assert freqtrade.handle_cancel_enter(trade, limit_buy_order_canceled_empty, reason)
     assert cancel_order_mock.call_count == 0
     assert log_has_re(
@@ -3071,7 +3070,7 @@ def test_handle_cancel_enter_exchanges(mocker, caplog, default_conf_usdt, is_short,
     'String Return value',
     123
 ])
-def test_handle_cancel_enter_corder_empty(mocker, default_conf_usdt, limit_order, is_short,
+def test_handle_cancel_enter_corder_empty(mocker, default_conf_usdt, limit_order, is_short, fee,
                                           cancelorder) -> None:
     patch_RPCManager(mocker)
     patch_exchange(mocker)
@@ -3079,19 +3078,15 @@ def test_handle_cancel_enter_corder_empty(mocker, default_conf_usdt, limit_order, is_short, fee,
     cancel_order_mock = MagicMock(return_value=cancelorder)
     mocker.patch.multiple(
         'freqtrade.exchange.Exchange',
-        cancel_order=cancel_order_mock
+        cancel_order=cancel_order_mock,
+        fetch_order=MagicMock(side_effect=InvalidOrderException)
     )
     freqtrade = FreqtradeBot(default_conf_usdt)
     freqtrade._notify_enter_cancel = MagicMock()
-    # TODO: Convert to real trade
-    trade = MagicMock()
-    trade.pair = 'LTC/USDT'
-    trade.entry_side = "buy"
-    trade.open_rate = 200
-    trade.entry_side = "buy"
-    trade.open_order_id = "open_order_noop"
-    trade.nr_of_successful_entries = 0
+    trade = mock_trade_usdt_4(fee, is_short)
+    Trade.query.session.add(trade)
+    Trade.commit()
     l_order['filled'] = 0.0
     l_order['status'] = 'open'
     reason = CANCEL_REASON['TIMEOUT']
@@ -3121,20 +3116,21 @@ def test_handle_cancel_exit_limit(mocker, default_conf_usdt, fee) -> None:
         amount=2,
         exchange='binance',
         open_rate=0.245441,
-        open_order_id="123456",
+        open_order_id="sell_123456",
         open_date=arrow.utcnow().shift(days=-2).datetime,
         fee_open=fee.return_value,
         fee_close=fee.return_value,
         close_rate=0.555,
         close_date=arrow.utcnow().datetime,
         exit_reason="sell_reason_whatever",
+        stake_amount=0.245441 * 2,
     )
     trade.orders = [
         Order(
            ft_order_side='buy',
            ft_pair=trade.pair,
-           ft_is_open=True,
-           order_id='123456',
+           ft_is_open=False,
+           order_id='buy_123456',
            status="closed",
            symbol=trade.pair,
            order_type="market",
@@ -3147,15 +3143,33 @@ def test_handle_cancel_exit_limit(mocker, default_conf_usdt, fee) -> None:
            order_date=trade.open_date,
            order_filled_date=trade.open_date,
         ),
+        Order(
+           ft_order_side='sell',
+           ft_pair=trade.pair,
+           ft_is_open=True,
+           order_id='sell_123456',
+           status="open",
+           symbol=trade.pair,
+           order_type="limit",
+           side="sell",
+           price=trade.open_rate,
+           average=trade.open_rate,
+           filled=0.0,
+           remaining=trade.amount,
+           cost=trade.open_rate * trade.amount,
+           order_date=trade.open_date,
+           order_filled_date=trade.open_date,
+        ),
     ]
-    order = {'id': "123456",
+    order = {'id': "sell_123456",
             'remaining': 1,
             'amount': 1,
             'status': "open"}
     reason = CANCEL_REASON['TIMEOUT']
+    send_msg_mock.reset_mock()
     assert freqtrade.handle_cancel_exit(trade, order, reason)
     assert cancel_order_mock.call_count == 1
-    assert send_msg_mock.call_count == 2
+    assert send_msg_mock.call_count == 1
     assert trade.close_rate is None
     assert trade.exit_reason is None
@@ -3181,11 +3195,13 @@ def test_handle_cancel_exit_limit(mocker, default_conf_usdt, fee) -> None:
 def test_handle_cancel_exit_cancel_exception(mocker, default_conf_usdt) -> None:
     patch_RPCManager(mocker)
     patch_exchange(mocker)
-    mocker.patch(
-        'freqtrade.exchange.Exchange.cancel_order_with_result', side_effect=InvalidOrderException())
+    mocker.patch('freqtrade.exchange.Exchange.get_min_pair_stake_amount', return_value=0.0)
+    mocker.patch('freqtrade.exchange.Exchange.cancel_order_with_result',
+                 side_effect=InvalidOrderException())
     freqtrade = FreqtradeBot(default_conf_usdt)
+    # TODO: should not be magicmock
     trade = MagicMock()
     reason = CANCEL_REASON['TIMEOUT']
     order = {'remaining': 1,
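The recurring "# TODO: should not be magicmock" markers and the switch to mock_trade_usdt_4 in this file follow from how MagicMock behaves: any attribute access succeeds and returns another (truthy) MagicMock, so assertions against a mocked trade can pass without testing anything real. A small standalone demonstration:

from unittest.mock import MagicMock

trade = MagicMock()
# Attribute access on a MagicMock never fails and always returns a truthy child mock,
# so both assertions pass even though nothing was ever set on this "trade".
assert trade.nr_of_successful_entries
assert trade.attribute_that_no_real_trade_has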


@@ -2,7 +2,7 @@ from unittest.mock import MagicMock
 import pytest
-from freqtrade.enums import ExitCheckTuple, ExitType
+from freqtrade.enums import ExitCheckTuple, ExitType, TradingMode
 from freqtrade.persistence import Trade
 from freqtrade.persistence.models import Order
 from freqtrade.rpc.rpc import RPC
@@ -455,10 +455,12 @@ def test_dca_order_adjust(default_conf_usdt, ticker_usdt, fee, mocker) -> None:
     assert pytest.approx(trade.orders[-1].amount) == 61.538461232
-def test_dca_exiting(default_conf_usdt, ticker_usdt, fee, mocker, caplog) -> None:
+@pytest.mark.parametrize('leverage', [1, 2])
+def test_dca_exiting(default_conf_usdt, ticker_usdt, fee, mocker, caplog, leverage) -> None:
     default_conf_usdt['position_adjustment_enable'] = True
     freqtrade = get_patched_freqtradebot(mocker, default_conf_usdt)
+    freqtrade.trading_mode = TradingMode.FUTURES
     mocker.patch.multiple(
         'freqtrade.exchange.Exchange',
         fetch_ticker=ticker_usdt,
@@ -467,15 +469,17 @@ def test_dca_exiting(default_conf_usdt, ticker_usdt, fee, mocker, caplog) -> None:
         price_to_precision=lambda s, x, y: y,
         get_min_pair_stake_amount=MagicMock(return_value=10),
     )
+    mocker.patch("freqtrade.exchange.Exchange.get_max_leverage", return_value=10)
     patch_get_signal(freqtrade)
+    freqtrade.strategy.leverage = MagicMock(return_value=leverage)
     freqtrade.enter_positions()
     assert len(Trade.get_trades().all()) == 1
     trade = Trade.get_trades().first()
     assert len(trade.orders) == 1
     assert pytest.approx(trade.stake_amount) == 60
-    assert pytest.approx(trade.amount) == 30.0
+    assert pytest.approx(trade.amount) == 30.0 * leverage
     assert trade.open_rate == 2.0
     # Too small size
@@ -484,8 +488,9 @@ def test_dca_exiting(default_conf_usdt, ticker_usdt, fee, mocker, caplog) -> None:
     trade = Trade.get_trades().first()
     assert len(trade.orders) == 1
     assert pytest.approx(trade.stake_amount) == 60
-    assert pytest.approx(trade.amount) == 30.0
-    assert log_has_re("Remaining amount of 1.6.* would be smaller than the minimum of 10.", caplog)
+    assert pytest.approx(trade.amount) == 30.0 * leverage
+    assert log_has_re(
+        r"Remaining amount of \d\.\d+.* would be smaller than the minimum of 10.", caplog)
     freqtrade.strategy.adjust_trade_position = MagicMock(return_value=-20)
@@ -494,7 +499,7 @@ def test_dca_exiting(default_conf_usdt, ticker_usdt, fee, mocker, caplog) -> None:
     assert len(trade.orders) == 2
     assert trade.orders[-1].ft_order_side == 'sell'
     assert pytest.approx(trade.stake_amount) == 40.198
-    assert pytest.approx(trade.amount) == 20.099
+    assert pytest.approx(trade.amount) == 20.099 * leverage
     assert trade.open_rate == 2.0
     assert trade.is_open
     caplog.clear()
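One detail worth noting in the hunks above: because the remaining amount now depends on the parametrized leverage, the log assertion switches from a literal "1.6" to a regex that accepts any decimal value. A quick illustration of the pattern using plain Python re (the sample messages are illustrative, independent of freqtrade's log_has_re helper):

import re

pattern = r"Remaining amount of \d\.\d+.* would be smaller than the minimum of 10."
for message in ("Remaining amount of 1.6 would be smaller than the minimum of 10.",
                "Remaining amount of 3.2 would be smaller than the minimum of 10."):
    # re.search succeeds for any single-digit.decimal remaining amount.
    assert re.search(pattern, message)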