Merge branch 'develop' into cancel_partial_sell

Matthias 2022-10-02 08:38:18 +02:00
commit a5bc75b48c
24 changed files with 146 additions and 79 deletions

Binary file not shown: new image, 80 KiB (the Binance futures settings screenshot referenced below).


@ -60,11 +60,18 @@ Binance supports [time_in_force](configuration.md#understand-order_time_in_force
Binance supports `stoploss_on_exchange` and uses `stop-loss-limit` orders. It provides great advantages, so we recommend enabling stoploss on exchange to benefit from them.
On futures, Binance supports both `stop-limit` as well as `stop-market` orders. You can use either `"limit"` or `"market"` in the `order_types.stoploss` configuration setting to decide which type to use.
### Binance Blacklist
### Binance Blacklist recommendation
For Binance, it is suggested to add `"BNB/<STAKE>"` to your blacklist to avoid issues, unless you are willing to maintain enough extra `BNB` on the account, or to disable using `BNB` for fees.
Binance accounts may use `BNB` for fees, and if a trade happens to be on `BNB`, further trades may consume this position and make the initial BNB trade unsellable as the expected amount is not there anymore.
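For reference, a minimal configuration sketch (shown as a Python dict; values are illustrative and assume a USDT stake currency):

```python
# Illustrative excerpt of a freqtrade configuration with the suggested blacklist
# entry; adjust the pair to match your actual stake currency.
config = {
    "stake_currency": "USDT",
    "exchange": {
        "name": "binance",
        "pair_blacklist": ["BNB/USDT"],  # keep the fee currency out of trading
    },
}
```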
### Binance sites
Binance has been split into 2, and users must use the correct ccxt exchange ID for their exchange, otherwise API keys are not recognized.
* [binance.com](https://www.binance.com/) - International users. Use exchange id: `binance`.
* [binance.us](https://www.binance.us/) - US based users. Use exchange id: `binanceus`.
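A minimal sketch of where the exchange id goes in the configuration (illustrative only; the `name` value is the ccxt exchange id listed above):

```python
# Illustrative: select the ccxt exchange id matching your account.
config = {
    "exchange": {
        "name": "binanceus",  # or "binance" for binance.com accounts
    },
}
```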
### Binance Futures
Binance has specific (unfortunately complex) [Futures Trading Quantitative Rules](https://www.binance.com/en/support/faq/4f462ebe6ff445d4a170be7d9e897272) which need to be followed, and which (among other restrictions) prohibit using too low a stake amount for too many orders.
@ -87,12 +94,14 @@ When trading on Binance Futures market, orderbook must be used because there is
},
```
### Binance sites
#### Binance futures settings
Binance has been split into 2, and users must use the correct ccxt exchange ID for their exchange, otherwise API keys are not recognized.
Users will also have to set the futures setting "Position Mode" to "One-way Mode" and "Asset Mode" to "Single-Asset Mode".
These settings will be checked on startup, and freqtrade will show an error if they are wrong.
* [binance.com](https://www.binance.com/) - International users. Use exchange id: `binance`.
* [binance.us](https://www.binance.us/) - US based users. Use exchange id: `binanceus`.
![Binance futures settings](assets/binance_futures_settings.png)
Freqtrade will not attempt to change these settings.
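For context, a hedged sketch of the configuration values that put freqtrade into futures trading (and therefore trigger the startup check described above); values are illustrative:

```python
# Illustrative: futures trading configuration excerpt.
config = {
    "trading_mode": "futures",
    "margin_mode": "isolated",
    "exchange": {"name": "binance"},
}
```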
## Kraken


@ -27,8 +27,7 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
| `weight_factor` | Weight training data points according to their recency (see details [here](freqai-feature-engineering.md#weighting-features-for-temporal-importance)). <br> **Datatype:** Positive float (typically < 1).
| `indicator_max_period_candles` | **No longer used (#7325)**. Replaced by `startup_candle_count` which is set in the [strategy](freqai-configuration.md#building-a-freqai-strategy). `startup_candle_count` is timeframe independent and defines the maximum *period* used in `populate_any_indicators()` for indicator creation. `FreqAI` uses this parameter together with the maximum timeframe in `include_timeframes` to calculate how many data points to download such that the first data point does not include a NaN. <br> **Datatype:** Positive integer.
| `indicator_periods_candles` | Time periods to calculate indicators for. The indicators are added to the base indicator dataset. <br> **Datatype:** List of positive integers.
| `stratify_training_data` | Split the feature set into training and testing datasets. For example, `stratify_training_data: 2` would set every 2nd data point into a separate dataset to be pulled from during training/testing. See details about how it works [here](freqai-running.md#data-stratification-for-training-and-testing-the-model). <br> **Datatype:** Positive integer.
| `principal_component_analysis` | Automatically reduce the dimensionality of the data set using Principal Component Analysis. See details about how it works [here](#reducing-data-dimensionality-with-principal-component-analysis) <br> **Datatype:** Boolean. defaults to `false`.
| `principal_component_analysis` | Automatically reduce the dimensionality of the data set using Principal Component Analysis. See details about how it works [here](#reducing-data-dimensionality-with-principal-component-analysis) <br> **Datatype:** Boolean. defaults to `False`.
| `plot_feature_importances` | Create a feature importance plot for each model for the top/bottom `plot_feature_importances` number of features.<br> **Datatype:** Integer, defaults to `0`.
| `DI_threshold` | Activates the use of the Dissimilarity Index for outlier detection when set to > 0. See details about how it works [here](freqai-feature-engineering.md#identifying-outliers-with-the-dissimilarity-index-di). <br> **Datatype:** Positive float (typically < 1).
| `use_SVM_to_remove_outliers` | Train a support vector machine to detect and remove outliers from the training dataset, as well as from incoming data points. See details about how it works [here](freqai-feature-engineering.md#identifying-outliers-using-a-support-vector-machine-svm). <br> **Datatype:** Boolean.
@ -41,7 +40,7 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
| | **Data split parameters**
| `data_split_parameters` | Include any additional parameters available from Scikit-learn `train_test_split()`, which are shown [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) (external website). <br> **Datatype:** Dictionary.
| `test_size` | The fraction of data that should be used for testing instead of training. <br> **Datatype:** Positive float < 1.
| `shuffle` | Shuffle the training data points during training. Typically, for time-series forecasting, this is set to `False`. <br> **Datatype:** Boolean.
| `shuffle` | Shuffle the training data points during training. Typically, this is set to `False` to preserve the chronological order of the data in time-series forecasting. <br> **Datatype:** Boolean. <br> Default: `False`.
| | **Model training parameters**
| `model_training_parameters` | A flexible dictionary that includes all parameters available by the selected model library. For example, if you use `LightGBMRegressor`, this dictionary can contain any parameter available by the `LightGBMRegressor` [here](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html) (external website). If you select a different model, this dictionary can contain any parameter from that model. <br> **Datatype:** Dictionary.
| `n_estimators` | The number of boosted trees to fit in regression. <br> **Datatype:** Integer.
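As a non-authoritative illustration, the fragment below (a Python dict mirroring the JSON configuration) combines several of the parameters from the table above; all values are placeholders:

```python
# Illustrative FreqAI configuration fragment built from the documented parameters.
freqai_section = {
    "freqai": {
        "feature_parameters": {
            "weight_factor": 0.9,                  # favour recent data points
            "indicator_periods_candles": [10, 20],
            "principal_component_analysis": False,
            "use_SVM_to_remove_outliers": True,
        },
        "data_split_parameters": {
            "test_size": 0.25,
            "shuffle": False,                      # keep chronological order
        },
        "model_training_parameters": {
            "n_estimators": 800,
        },
    },
}
```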


@ -105,23 +105,6 @@ During dry/live mode, FreqAI trains each coin pair sequentially (on separate thr
In the presented example config, the user will only allow predictions on models that are less than half an hour old.
## Data stratification for training and testing the model
You can stratify (group) the training/testing data using:
```json
"freqai": {
    "feature_parameters" : {
        "stratify_training_data": 3
    }
}
```
This will split the data chronologically so that every Xth data point is used to test the model after training. In the example above, the user is asking for every third data point in the dataframe to be used for
testing; the other points are used for training.
The test data is used to evaluate the performance of the model after training. If the test score is high, the model is able to capture the behavior of the data well. If the test score is low, either the model does not capture the complexity of the data, the test data is significantly different from the train data, or a different type of model should be used.
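A small sketch of the grouping described above (illustrative, not FreqAI's exact implementation): with `"stratify_training_data": 3`, every third data point is flagged for the test set and the remaining points stay in the training set.

```python
import numpy as np

# Illustrative: build a 0/1 mask where 1 marks test points and 0 marks training points.
n_points, every_nth = 12, 3
stratification = np.zeros(n_points)
for i in range(1, n_points):
    if i % every_nth == 0:
        stratification[i] = 1

train_idx = np.where(stratification == 0)[0]   # e.g. [0, 1, 2, 4, 5, ...]
test_idx = np.where(stratification == 1)[0]    # e.g. [3, 6, 9]
```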
## Controlling the model learning process
Model training parameters are unique to the selected machine learning library. FreqAI allows you to set any parameter for any library using the `model_training_parameters` dictionary in the config. The example config (found in `config_examples/config_freqai.example.json`) shows some of the example parameters associated with `Catboost` and `LightGBM`, but you can add any parameters available in those libraries or any other machine learning library you choose to implement.
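As a hedged illustration of how `model_training_parameters` reaches the model library (assuming LightGBM is installed; this is not FreqAI's exact wiring):

```python
from lightgbm import LGBMRegressor

# Illustrative: the config dictionary is unpacked into the model constructor,
# so any LGBMRegressor argument can be used.
model_training_parameters = {"n_estimators": 800, "learning_rate": 0.02}
model = LGBMRegressor(**model_training_parameters)
# model.fit(train_features, train_labels) would then train with these settings.
```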


@ -37,3 +37,12 @@ pip install -e .
# Ensure freqUI is at the latest version
freqtrade install-ui
```
### Problems updating
Update problems usually come from missing dependencies (you didn't follow the above instructions) or from updated dependencies which fail to install (for example TA-lib).
Please refer to the corresponding installation sections (common problems are linked below).
Common problems and their solutions:
* [ta-lib update on windows](windows_installation.md#2-install-ta-lib)


@ -34,7 +34,7 @@ python -m venv .env
.env\Scripts\activate.ps1
# optionally install ta-lib from wheel
# If necessary, adjust the below filename to match the downloaded wheel
pip install --find-links build_helpers\ TA-Lib
pip install --find-links build_helpers\ TA-Lib -U
pip install -r requirements.txt
pip install -e .
freqtrade


@ -567,6 +567,7 @@ CONF_SCHEMA = {
"properties": {
"test_size": {"type": "number"},
"random_state": {"type": "integer"},
"shuffle": {"type": "boolean", "default": False}
},
},
"model_training_parameters": {


@ -68,6 +68,37 @@ class Binance(Exchange):
tickers = deep_merge_dicts(bidsasks, tickers, allow_null_overrides=False)
return tickers
@retrier
def additional_exchange_init(self) -> None:
"""
Additional exchange initialization logic.
.api will be available at this point.
Must be overridden in child methods if required.
"""
try:
if self.trading_mode == TradingMode.FUTURES and not self._config['dry_run']:
position_side = self._api.fapiPrivateGetPositionsideDual()
self._log_exchange_response('position_side_setting', position_side)
assets_margin = self._api.fapiPrivateGetMultiAssetsMargin()
self._log_exchange_response('multi_asset_margin', assets_margin)
msg = ""
if position_side.get('dualSidePosition') is True:
msg += (
"\nHedge Mode is not supported by freqtrade. "
"Please change 'Position Mode' on your binance futures account.")
if assets_margin.get('multiAssetsMargin') is True:
msg += ("\nMulti-Asset Mode is not supported by freqtrade. "
"Please change 'Asset Mode' on your binance futures account.")
if msg:
raise OperationalException(msg)
except ccxt.DDoSProtection as e:
raise DDosProtection(e) from e
except (ccxt.NetworkError, ccxt.ExchangeError) as e:
raise TemporaryError(
f'Could not set leverage due to {e.__class__.__name__}. Message: {e}') from e
except ccxt.BaseError as e:
raise OperationalException(e) from e
@retrier
def _set_leverage(
self,


@ -1292,7 +1292,7 @@ class Exchange:
order = self.fetch_order(order_id, pair)
except InvalidOrderException:
logger.warning(f"Could not fetch cancelled order {order_id}.")
order = {'fee': {}, 'status': 'canceled', 'amount': amount, 'info': {}}
order = {'id': order_id, 'fee': {}, 'status': 'canceled', 'amount': amount, 'info': {}}
return order


@ -78,7 +78,8 @@ class Okx(Exchange):
raise DDosProtection(e) from e
except (ccxt.NetworkError, ccxt.ExchangeError) as e:
raise TemporaryError(
f'Could not set leverage due to {e.__class__.__name__}. Message: {e}') from e
f'Error in additional_exchange_init due to {e.__class__.__name__}. Message: {e}'
) from e
except ccxt.BaseError as e:
raise OperationalException(e) from e


@ -92,7 +92,7 @@ class BaseClassifierModel(IFreqaiModel):
filtered_df = dk.normalize_data_from_metadata(filtered_df)
dk.data_dictionary["prediction_features"] = filtered_df
self.data_cleaning_predict(dk, filtered_df)
self.data_cleaning_predict(dk)
predictions = self.model.predict(dk.data_dictionary["prediction_features"])
pred_df = DataFrame(predictions, columns=dk.label_list)


@ -92,7 +92,7 @@ class BaseRegressionModel(IFreqaiModel):
dk.data_dictionary["prediction_features"] = filtered_df
# optional additional data cleaning/analysis
self.data_cleaning_predict(dk, filtered_df)
self.data_cleaning_predict(dk)
predictions = self.model.predict(dk.data_dictionary["prediction_features"])
pred_df = DataFrame(predictions, columns=dk.label_list)


@ -423,7 +423,7 @@ class FreqaiDataDrawer:
dk.data["data_path"] = str(dk.data_path)
dk.data["model_filename"] = str(dk.model_filename)
dk.data["training_features_list"] = list(dk.data_dictionary["train_features"].columns)
dk.data["training_features_list"] = dk.training_features_list
dk.data["label_list"] = dk.label_list
# store the metadata
with open(save_path / f"{dk.model_filename}_metadata.json", "w") as fp:


@ -134,20 +134,15 @@ class FreqaiDataKitchen:
"""
feat_dict = self.freqai_config["feature_parameters"]
if 'shuffle' not in self.freqai_config['data_split_parameters']:
self.freqai_config["data_split_parameters"].update({'shuffle': False})
weights: npt.ArrayLike
if feat_dict.get("weight_factor", 0) > 0:
weights = self.set_weights_higher_recent(len(filtered_dataframe))
else:
weights = np.ones(len(filtered_dataframe))
if feat_dict.get("stratify_training_data", 0) > 0:
stratification = np.zeros(len(filtered_dataframe))
for i in range(1, len(stratification)):
if i % feat_dict.get("stratify_training_data", 0) == 0:
stratification[i] = 1
else:
stratification = None
if self.freqai_config.get('data_split_parameters', {}).get('test_size', 0.1) != 0:
(
train_features,
@ -160,7 +155,6 @@ class FreqaiDataKitchen:
filtered_dataframe[: filtered_dataframe.shape[0]],
labels,
weights,
stratify=stratification,
**self.config["freqai"]["data_split_parameters"],
)
else:
@ -881,6 +875,7 @@ class FreqaiDataKitchen:
"""
column_names = dataframe.columns
features = [c for c in column_names if "%" in c]
if not features:
raise OperationalException("Could not find any features!")


@ -275,7 +275,8 @@ class IFreqaiModel(ABC):
if dk.check_if_backtest_prediction_exists():
self.dd.load_metadata(dk)
self.check_if_feature_list_matches_strategy(dataframe_train, dk)
dk.find_features(dataframe_train)
self.check_if_feature_list_matches_strategy(dk)
append_df = dk.get_backtesting_prediction()
dk.append_predictions(append_df)
else:
@ -296,7 +297,6 @@ class IFreqaiModel(ABC):
else:
self.model = self.dd.load_data(pair, dk)
# self.check_if_feature_list_matches_strategy(dataframe_train, dk)
pred_df, do_preds = self.predict(dataframe_backtest, dk)
append_df = dk.get_predictions_to_append(pred_df, do_preds)
dk.append_predictions(append_df)
@ -420,7 +420,7 @@ class IFreqaiModel(ABC):
return
def check_if_feature_list_matches_strategy(
self, dataframe: DataFrame, dk: FreqaiDataKitchen
self, dk: FreqaiDataKitchen
) -> None:
"""
Ensure user is passing the proper feature set if they are reusing an `identifier` pointing
@ -429,11 +429,12 @@ class IFreqaiModel(ABC):
:param dk: FreqaiDataKitchen = non-persistent data container/analyzer for
current coin/bot loop
"""
dk.find_features(dataframe)
if "training_features_list_raw" in dk.data:
feature_list = dk.data["training_features_list_raw"]
else:
feature_list = dk.data['training_features_list']
if dk.training_features_list != feature_list:
raise OperationalException(
"Trying to access pretrained model with `identifier` "
@ -481,13 +482,16 @@ class IFreqaiModel(ABC):
if self.freqai_info["feature_parameters"].get('noise_standard_deviation', 0):
dk.add_noise_to_training_features()
def data_cleaning_predict(self, dk: FreqaiDataKitchen, dataframe: DataFrame) -> None:
def data_cleaning_predict(self, dk: FreqaiDataKitchen) -> None:
"""
Base data cleaning method for predict.
Functions here are complementary to the functions of data_cleaning_train.
"""
ft_params = self.freqai_info["feature_parameters"]
# ensure user is feeding the correct indicators to the model
self.check_if_feature_list_matches_strategy(dk)
if ft_params.get('inlier_metric_window', 0):
dk.compute_inlier_metric(set_='predict')
@ -505,9 +509,6 @@ class IFreqaiModel(ABC):
if ft_params.get("use_DBSCAN_to_remove_outliers", False):
dk.use_DBSCAN_to_remove_outliers(predict=True)
# ensure user is feeding the correct indicators to the model
self.check_if_feature_list_matches_strategy(dk.data_dictionary['prediction_features'], dk)
def model_exists(self, dk: FreqaiDataKitchen) -> bool:
"""
Given a pair and path, check if a model already exists


@ -198,8 +198,10 @@ class ApiServer(RPCHandler):
logger.debug(f"Found message of type: {message.get('type')}")
# Broadcast it
await self._ws_channel_manager.broadcast(message)
# Sleep, make this configurable?
await asyncio.sleep(0.1)
# Limit messages per sec.
# Could cause problems with queue size if too low, and
# problems with network traffic if too high.
await asyncio.sleep(0.001)
except asyncio.CancelledError:
pass


@ -30,9 +30,9 @@ class Discord(Webhook):
pass
def send_msg(self, msg) -> None:
logger.info(f"Sending discord message: {msg}")
if msg['type'].value in self.config['discord']:
logger.info(f"Sending discord message: {msg}")
msg['strategy'] = self.strategy
msg['timeframe'] = self.timeframe


@ -61,6 +61,14 @@ class Webhook(RPCHandler):
RPCMessageType.STARTUP,
RPCMessageType.WARNING):
valuedict = whconfig.get('webhookstatus')
elif msg['type'] in (
RPCMessageType.PROTECTION_TRIGGER,
RPCMessageType.PROTECTION_TRIGGER_GLOBAL,
RPCMessageType.WHITELIST,
RPCMessageType.ANALYZED_DF,
RPCMessageType.STRATEGY_MSG):
# Don't fail for non-implemented types
return
else:
raise NotImplementedError('Unknown message type: {}'.format(msg['type']))
if not valuedict:


@ -501,6 +501,24 @@ def test_fill_leverage_tiers_binance_dryrun(default_conf, mocker, leverage_tiers
assert len(v) == len(value)
def test_additional_exchange_init_binance(default_conf, mocker):
api_mock = MagicMock()
api_mock.fapiPrivateGetPositionsideDual = MagicMock(return_value={"dualSidePosition": True})
api_mock.fapiPrivateGetMultiAssetsMargin = MagicMock(return_value={"multiAssetsMargin": True})
default_conf['dry_run'] = False
default_conf['trading_mode'] = TradingMode.FUTURES
default_conf['margin_mode'] = MarginMode.ISOLATED
with pytest.raises(OperationalException,
match=r"Hedge Mode is not supported.*\nMulti-Asset Mode is not supported.*"):
get_patched_exchange(mocker, default_conf, id="binance", api_mock=api_mock)
api_mock.fapiPrivateGetPositionsideDual = MagicMock(return_value={"dualSidePosition": False})
api_mock.fapiPrivateGetMultiAssetsMargin = MagicMock(return_value={"multiAssetsMargin": False})
exchange = get_patched_exchange(mocker, default_conf, id="binance", api_mock=api_mock)
assert exchange
ccxt_exceptionhandlers(mocker, default_conf, api_mock, 'binance',
"additional_exchange_init", "fapiPrivateGetPositionsideDual")
def test__set_leverage_binance(mocker, default_conf):
api_mock = MagicMock()


@ -137,6 +137,7 @@ def exchange_futures(request, exchange_conf, class_mocker):
'freqtrade.exchange.binance.Binance.fill_leverage_tiers')
class_mocker.patch('freqtrade.exchange.exchange.Exchange.fetch_trading_fees')
class_mocker.patch('freqtrade.exchange.okx.Okx.additional_exchange_init')
class_mocker.patch('freqtrade.exchange.binance.Binance.additional_exchange_init')
class_mocker.patch('freqtrade.exchange.exchange.Exchange.load_cached_leverage_tiers',
return_value=None)
class_mocker.patch('freqtrade.exchange.exchange.Exchange.cache_leverage_tiers')


@ -86,7 +86,7 @@ def test_use_SVM_to_remove_outliers_and_outlier_protection(mocker, freqai_conf,
freqai_conf['freqai']['feature_parameters'].update({"outlier_protection_percentage": 0.1})
freqai.dk.use_SVM_to_remove_outliers(predict=False)
assert log_has_re(
"SVM detected 8.09%",
"SVM detected 8.66%",
caplog,
)


@ -207,10 +207,13 @@ async def test_emc_create_connection_invalid_port(default_conf, caplog, mocker):
})
dp = DataProvider(default_conf, None, None, None)
# Handle start explicitly to avoid messing with threading in tests
mocker.patch("freqtrade.rpc.external_message_consumer.ExternalMessageConsumer.start",)
emc = ExternalMessageConsumer(default_conf, dp)
try:
await asyncio.sleep(0.01)
emc._running = True
await emc._create_connection(emc.producers[0], asyncio.Lock())
assert log_has_re(r".+ is an invalid WebSocket URL .+", caplog)
finally:
emc.shutdown()


@ -365,6 +365,14 @@ def test_exception_send_msg(default_conf, mocker, caplog):
with pytest.raises(NotImplementedError):
webhook.send_msg(msg)
# Test no failure for not implemented but known messagetypes
for e in RPCMessageType:
msg = {
'type': e,
'status': 'whatever'
}
webhook.send_msg(msg)
def test__send_msg(default_conf, mocker, caplog):
default_conf["webhook"] = get_webhook_dict()


@ -28,6 +28,7 @@ from tests.conftest import (create_mock_trades, create_mock_trades_usdt, get_pat
from tests.conftest_trades import (MOCK_TRADE_COUNT, entry_side, exit_side, mock_order_1,
mock_order_2, mock_order_2_sell, mock_order_3, mock_order_3_sell,
mock_order_4, mock_order_5_stoploss, mock_order_6_sell)
from tests.conftest_trades_usdt import mock_trade_usdt_4
def patch_RPCManager(mocker) -> MagicMock:
@ -1060,6 +1061,7 @@ def test_add_stoploss_on_exchange(mocker, default_conf_usdt, limit_order, is_sho
freqtrade = FreqtradeBot(default_conf_usdt)
freqtrade.strategy.order_types['stoploss_on_exchange'] = True
# TODO: should not be magicmock
trade = MagicMock()
trade.is_short = is_short
trade.open_order_id = None
@ -1101,6 +1103,7 @@ def test_handle_stoploss_on_exchange(mocker, default_conf_usdt, fee, caplog, is_
# First case: when stoploss is not yet set but the order is open
# should get the stoploss order id immediately
# and should return false as no trade actually happened
# TODO: should not be magicmock
trade = MagicMock()
trade.is_short = is_short
trade.is_open = True
@ -1879,6 +1882,7 @@ def test_exit_positions(mocker, default_conf_usdt, limit_order, is_short, caplog
return_value=limit_order[entry_side(is_short)])
mocker.patch('freqtrade.exchange.Exchange.get_trades_for_order', return_value=[])
# TODO: should not be magicmock
trade = MagicMock()
trade.is_short = is_short
trade.open_order_id = '123'
@ -1902,6 +1906,7 @@ def test_exit_positions_exception(mocker, default_conf_usdt, limit_order, caplog
order = limit_order[entry_side(is_short)]
mocker.patch('freqtrade.exchange.Exchange.fetch_order', return_value=order)
# TODO: should not be magicmock
trade = MagicMock()
trade.is_short = is_short
trade.open_order_id = None
@ -2042,6 +2047,7 @@ def test_update_trade_state_exception(mocker, default_conf_usdt, is_short, limit
freqtrade = get_patched_freqtradebot(mocker, default_conf_usdt)
mocker.patch('freqtrade.exchange.Exchange.fetch_order', return_value=order)
# TODO: should not be magicmock
trade = MagicMock()
trade.open_order_id = '123'
trade.amount = 123
@ -2060,6 +2066,7 @@ def test_update_trade_state_orderexception(mocker, default_conf_usdt, caplog) ->
mocker.patch('freqtrade.exchange.Exchange.fetch_order',
MagicMock(side_effect=InvalidOrderException))
# TODO: should not be magicmock
trade = MagicMock()
trade.open_order_id = '123'
@ -2980,7 +2987,7 @@ def test_manage_open_orders_exception(default_conf_usdt, ticker_usdt, open_trade
@pytest.mark.parametrize("is_short", [False, True])
def test_handle_cancel_enter(mocker, caplog, default_conf_usdt, limit_order, is_short) -> None:
def test_handle_cancel_enter(mocker, caplog, default_conf_usdt, limit_order, is_short, fee) -> None:
patch_RPCManager(mocker)
patch_exchange(mocker)
l_order = limit_order[entry_side(is_short)]
@ -2994,16 +3001,12 @@ def test_handle_cancel_enter(mocker, caplog, default_conf_usdt, limit_order, is_
freqtrade = FreqtradeBot(default_conf_usdt)
freqtrade._notify_enter_cancel = MagicMock()
# TODO: Convert to real trade
trade = MagicMock()
trade.pair = 'LTC/USDT'
trade.open_rate = 200
trade.is_short = False
trade.entry_side = "buy"
trade.amount = 100
trade = mock_trade_usdt_4(fee, is_short)
Trade.query.session.add(trade)
Trade.commit()
l_order['filled'] = 0.0
l_order['status'] = 'open'
trade.nr_of_successful_entries = 0
reason = CANCEL_REASON['TIMEOUT']
assert freqtrade.handle_cancel_enter(trade, l_order, reason)
assert cancel_order_mock.call_count == 1
@ -3035,7 +3038,7 @@ def test_handle_cancel_enter(mocker, caplog, default_conf_usdt, limit_order, is_
@pytest.mark.parametrize("is_short", [False, True])
@pytest.mark.parametrize("limit_buy_order_canceled_empty", ['binance', 'ftx', 'kraken', 'bittrex'],
indirect=['limit_buy_order_canceled_empty'])
def test_handle_cancel_enter_exchanges(mocker, caplog, default_conf_usdt, is_short,
def test_handle_cancel_enter_exchanges(mocker, caplog, default_conf_usdt, is_short, fee,
limit_buy_order_canceled_empty) -> None:
patch_RPCManager(mocker)
patch_exchange(mocker)
@ -3046,11 +3049,10 @@ def test_handle_cancel_enter_exchanges(mocker, caplog, default_conf_usdt, is_sho
freqtrade = FreqtradeBot(default_conf_usdt)
reason = CANCEL_REASON['TIMEOUT']
# TODO: Convert to real trade
trade = MagicMock()
trade.nr_of_successful_entries = 0
trade.pair = 'LTC/ETH'
trade.entry_side = "sell" if is_short else "buy"
trade = mock_trade_usdt_4(fee, is_short)
Trade.query.session.add(trade)
Trade.commit()
assert freqtrade.handle_cancel_enter(trade, limit_buy_order_canceled_empty, reason)
assert cancel_order_mock.call_count == 0
assert log_has_re(
@ -3068,7 +3070,7 @@ def test_handle_cancel_enter_exchanges(mocker, caplog, default_conf_usdt, is_sho
'String Return value',
123
])
def test_handle_cancel_enter_corder_empty(mocker, default_conf_usdt, limit_order, is_short,
def test_handle_cancel_enter_corder_empty(mocker, default_conf_usdt, limit_order, is_short, fee,
cancelorder) -> None:
patch_RPCManager(mocker)
patch_exchange(mocker)
@ -3076,20 +3078,15 @@ def test_handle_cancel_enter_corder_empty(mocker, default_conf_usdt, limit_order
cancel_order_mock = MagicMock(return_value=cancelorder)
mocker.patch.multiple(
'freqtrade.exchange.Exchange',
cancel_order=cancel_order_mock
cancel_order=cancel_order_mock,
fetch_order=MagicMock(side_effect=InvalidOrderException)
)
freqtrade = FreqtradeBot(default_conf_usdt)
freqtrade._notify_enter_cancel = MagicMock()
# TODO: Convert to real trade
trade = MagicMock()
trade.pair = 'LTC/USDT'
trade.entry_side = "buy"
trade.open_rate = 200
trade.entry_side = "buy"
trade.open_order_id = "open_order_noop"
trade.nr_of_successful_entries = 0
trade.amount = 100
trade = mock_trade_usdt_4(fee, is_short)
Trade.query.session.add(trade)
Trade.commit()
l_order['filled'] = 0.0
l_order['status'] = 'open'
reason = CANCEL_REASON['TIMEOUT']
@ -3218,6 +3215,7 @@ def test_handle_cancel_exit_cancel_exception(mocker, default_conf_usdt) -> None:
freqtrade = FreqtradeBot(default_conf_usdt)
# TODO: should not be magicmock
trade = MagicMock()
reason = CANCEL_REASON['TIMEOUT']
order = {'remaining': 1,