For the sample below, you then need to add the command line parameter `--hyperopt-loss SuperDuperHyperOptLoss` to your hyperopt call so that this function is used.
A simplified sketch of such a loss class is shown below; a full sample, identical to the default hyperopt loss implementation, can be found in [userdata/hyperopts](https://github.com/freqtrade/freqtrade/blob/develop/freqtrade/templates/sample_hyperopt_loss.py).
* `backtest_stats`: Backtesting statistics, in the same format as the "strategy" substructure of the backtesting output file. Available fields can be seen in `generate_strategy_stats()` in `optimize_reports.py`.
This function needs to return a floating point number (`float`). Smaller numbers are interpreted as better results. The parameters and their weighting are up to you.
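As a simplified, illustrative sketch only (not the full default implementation - check the linked sample for the complete signature and import path), a loss class that simply rewards higher total profit could look like this:

```python
from datetime import datetime

from pandas import DataFrame

from freqtrade.optimize.hyperopt import IHyperOptLoss


class SuperDuperHyperOptLoss(IHyperOptLoss):
    """
    Simplified example loss function - illustration only.
    """

    @staticmethod
    def hyperopt_loss_function(results: DataFrame, trade_count: int,
                               min_date: datetime, max_date: datetime,
                               *args, **kwargs) -> float:
        """
        Objective function - returns smaller numbers for better results.
        """
        total_profit = results['profit_ratio'].sum()
        # Negate the total profit so that more profitable epochs score lower (better).
        return -total_profit
```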
To override a pre-defined space (`roi_space`, `generate_roi_table`, `stoploss_space`, `trailing_space`), define a nested class called `HyperOpt` and define the required spaces in it, as shown in the sketch below.
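The following minimal sketch overrides only the stoploss space; the range and precision used here are arbitrary placeholders, not recommendations:

```python
from freqtrade.optimize.space import SKDecimal
from freqtrade.strategy import IStrategy


class MyAwesomeStrategy(IStrategy):
    class HyperOpt:
        # Override the pre-defined stoploss space with a custom (narrower) range.
        def stoploss_space():
            return [SKDecimal(-0.05, -0.01, decimals=3, name='stoploss')]
```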
You can define your own estimator for Hyperopt by implementing `generate_estimator()` in the Hyperopt subclass.
```python
class MyAwesomeStrategy(IStrategy):
    class HyperOpt:
        def generate_estimator():
            return "RF"
```
Possible values are either one of "GP", "RF", "ET", "GBRT" (details can be found in the [scikit-optimize documentation](https://scikit-optimize.github.io/)), or an instance of a class that inherits from `RegressorMixin` (from sklearn) and whose `predict` method has an optional `return_std` argument, which returns `std(Y | x)` along with `E[Y | x]`.
Some research will be necessary to find additional regressors.
Example for `ExtraTreesRegressor` ("ET") with additional parameters:
```python
class MyAwesomeStrategy(IStrategy):
    class HyperOpt:
        def generate_estimator():
            from skopt.learning import ExtraTreesRegressor
            # Corresponds to "ET" - but allows additional parameters.
            return ExtraTreesRegressor(n_estimators=100)
```
!!! Note
    While custom estimators can be provided, it's up to you as a user to research possible parameters and to analyze and understand which ones should be used.
    If you're unsure about this, it's best to use one of the defaults (`"ET"` has proven to be the most versatile) without further parameters.
For the additional spaces, scikit-optimize (in combination with Freqtrade) provides the following space types:
* `Categorical` - Pick from a list of categories (e.g. `Categorical(['a', 'b', 'c'], name="cat")`)
* `Integer` - Pick from a range of whole numbers (e.g. `Integer(1, 10, name='rsi')`)
* `SKDecimal` - Pick from a range of decimal numbers with limited precision (e.g. `SKDecimal(0.1, 0.5, decimals=3, name='adx')`). *Available only with freqtrade*.
* `Real` - Pick from a range of decimal numbers with full precision (e.g. `Real(0.1, 0.5, name='adx')`)
You can import all of these from `freqtrade.optimize.space`, although `Categorical`, `Integer` and `Real` are only aliases for their corresponding scikit-optimize Spaces. `SKDecimal` is provided by freqtrade for faster optimizations.
```python
from freqtrade.optimize.space import Categorical, Dimension, Integer, SKDecimal, Real # noqa
```
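As an illustration of how these space types are typically combined, the sketch below builds a list of dimensions; all names and ranges are arbitrary placeholders, not recommendations:

```python
from freqtrade.optimize.space import Categorical, Dimension, Integer, Real, SKDecimal

# Illustrative dimensions only - names and ranges are placeholders.
dimensions: list[Dimension] = [
    Categorical(['sma', 'ema', 'wma'], name='ma_type'),    # pick one of several categories
    Integer(10, 40, name='rsi_period'),                    # whole numbers from 10 to 40
    SKDecimal(0.01, 0.05, decimals=3, name='min_profit'),  # decimals with 3-digit precision
    Real(0.0001, 0.01, name='noise_level'),                # full-precision decimals
]
```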
!!! Hint "SKDecimal vs. Real"
    We recommend using `SKDecimal` instead of the `Real` space in almost all cases. While the Real space provides full accuracy (up to ~16 decimal places), this precision is rarely needed and leads to unnecessarily long hyperopt times.
    Assuming the definition of a rather small space (`SKDecimal(0.10, 0.15, decimals=2, name='xxx')`), SKDecimal has 6 possibilities (`[0.10, 0.11, 0.12, 0.13, 0.14, 0.15]`).
    A corresponding real space `Real(0.10, 0.15, name='xxx')`, on the other hand, has an almost unlimited number of possibilities (`[0.10, 0.100000000001, 0.100000000002, ... 0.149999999999, 0.15]`).