tune_predictors(pipeline: getml.pipeline.Pipeline, container: getml.data.Container, train='train', validation='validation', n_iter=0, score=None, num_threads=0)
A high-level interface for optimizing the predictors of a pipeline.

Efficiently optimizes the hyperparameters of the set of predictors (from getml.predictors) of a given pipeline by breaking each predictor's hyperparameter space down into carefully curated subspaces and optimizing each subspace in a sequential, multi-step process. For details on the recipes behind these routines, refer to the tuning routines section of the documentation.
Args:

- pipeline (getml.pipeline.Pipeline):
The base pipeline used to derive all models fitted and scored during the hyperparameter optimization. It defines the data schema and any hyperparameters that are not optimized.
- container (getml.data.Container):
The data container used for the hyperparameter tuning.
- train (str, optional):
The name of the subset in ‘container’ used for training.
- validation (str, optional):
The name of the subset in ‘container’ used for validation.
- n_iter (int, optional):
The number of iterations in the hyperparameter optimization.
- score (str, optional):
The score to optimize. Must be one of the scores in getml.pipeline.metrics.
- num_threads (int, optional):
The number of parallel threads to use. If set to 0, the number of threads will be inferred.
Example:

We assume that you have already set up your base pipeline (with its preprocessors, feature learners, and predictors) as well as a container holding the training and validation subsets:

    tuned_pipeline = getml.hyperopt.tune_predictors(
        pipeline=base_pipeline,
        container=container,
    )

Returns:

- pipeline (getml.pipeline.Pipeline):
A Pipeline containing the tuned predictors.
Not supported in the getML community edition.
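The sequential subspace search described above can be illustrated with a small, self-contained sketch. This is not getML's actual implementation; `tune_sequentially`, the toy objective, and the example subspaces are hypothetical names introduced here purely to show the general idea of optimizing one curated hyperparameter subspace at a time while carrying the best values forward.

```python
import itertools

def tune_sequentially(objective, subspaces, defaults):
    # Conceptual sketch only (NOT getML's tuning code): optimize the
    # hyperparameters one curated subspace at a time, keeping the best
    # values found so far fixed while searching the next subspace.
    best = dict(defaults)
    for subspace in subspaces:          # e.g. {"depth": [3, 5, 7]}
        names = list(subspace)
        best_score = None
        for combo in itertools.product(*(subspace[n] for n in names)):
            candidate = {**best, **dict(zip(names, combo))}
            s = objective(candidate)
            if best_score is None or s > best_score:
                best_score, best = s, candidate
    return best

# Toy objective whose optimum lies at depth=5, eta=0.1.
def score(params):
    return -abs(params["depth"] - 5) - abs(params["eta"] - 0.1)

result = tune_sequentially(
    score,
    subspaces=[{"depth": [3, 5, 7]}, {"eta": [0.05, 0.1, 0.3]}],
    defaults={"depth": 3, "eta": 0.3},
)
print(result)  # {'depth': 5, 'eta': 0.1}
```

Searching each small subspace in sequence keeps the number of fitted models far below what a grid search over the full joint space would require, at the cost of ignoring some cross-subspace interactions.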