tune_feature_learners

getml.hyperopt.tune_feature_learners(pipeline, container, train='train', validation='validation', n_iter=0, score=None, num_threads=0)
A high-level interface for optimizing the feature learners of a Pipeline.

Efficiently optimizes the hyperparameters of the set of feature learners (from feature_learning) of a given pipeline by breaking each feature learner's hyperparameter space down into carefully curated subspaces and optimizing the hyperparameters for each subspace in a sequential multi-step process. For further details about the recipes behind the tuning routines, refer to the tuning routines section.
- Args:
- pipeline (Pipeline):
The base pipeline used to derive all models fitted and scored during the hyperparameter optimization. It defines the data schema and any hyperparameters that are not optimized.
- container (Container):
The data container used for the hyperparameter tuning.
- train (str, optional):
The name of the subset in ‘container’ used for training.
- validation (str, optional):
The name of the subset in ‘container’ used for validation.
- n_iter (int, optional):
The number of iterations.
- score (str, optional):
The score to optimize. Must be from metrics.
- num_threads (int, optional):
The number of parallel threads to use. If set to 0, the number of threads will be inferred.
- Example:
We assume that you have already set up your Pipeline and Container.

    tuned_pipeline = getml.hyperopt.tune_feature_learners(
        pipeline=base_pipeline,
        container=container,
    )
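A more explicit call that passes the optional arguments documented above might look like the sketch below. The names `base_pipeline` and `container` are hypothetical placeholders for objects you have already set up; the metric constant `getml.pipeline.metrics.auc` is an assumption about where the scores listed under metrics live, and the whole snippet requires the getML Enterprise edition since tuning is not available in the community edition.

```python
import getml

# Hypothetical objects, assumed to be set up beforehand:
#   base_pipeline - a fitted-schema getml.pipeline.Pipeline
#   container     - a getml.data.Container with "train" and
#                   "validation" subsets
tuned_pipeline = getml.hyperopt.tune_feature_learners(
    pipeline=base_pipeline,            # defines schema and fixed hyperparameters
    container=container,               # holds the subsets named below
    train="train",                     # subset used for training
    validation="validation",           # subset used for validation
    score=getml.pipeline.metrics.auc,  # assumed metric constant; must be from metrics
    num_threads=0,                     # 0 infers the number of threads
)
```

Because the returned object is itself a Pipeline, it can be scored and deployed like any other pipeline.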
- Returns:
A Pipeline containing tuned versions of the feature learners.
- Note:
Not supported in the getML community edition.