tune_feature_learners(pipeline, population_table_training, population_table_validation, peripheral_tables=None, n_iter=0, score=None, num_threads=0)
A high-level interface for optimizing the feature learners of a Pipeline.

Efficiently optimizes the hyperparameters for the set of feature learners (from getml.feature_learning) of a given pipeline by breaking each feature learner's hyperparameter space down into carefully curated subspaces and optimizing the hyperparameters for each subspace in a sequential multi-step process. For further details about the recipes behind these routines, refer to the tuning routines documentation.
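To make the strategy concrete, here is a minimal, self-contained sketch of sequential subspace tuning in plain Python. It is purely illustrative and does not reflect getml's internal implementation; the function name, the subspace layout, and the grid search over each subspace are assumptions made for the example.

```python
from itertools import product

def tune_sequentially(evaluate, subspaces, defaults):
    """Illustrative only: tune one curated subspace at a time,
    carrying the best values forward into the next step.

    evaluate:  maps a full parameter dict to a score (higher is better)
    subspaces: list of dicts {param_name: [candidate values]}
    defaults:  starting values for every parameter
    """
    best_params = dict(defaults)
    for subspace in subspaces:
        names = list(subspace)
        best_score, best_candidate = float("-inf"), best_params
        # Search only the parameters in this subspace; all other
        # parameters stay fixed at their current best values.
        for combo in product(*(subspace[name] for name in names)):
            candidate = {**best_params, **dict(zip(names, combo))}
            score = evaluate(candidate)
            if score > best_score:
                best_score, best_candidate = score, candidate
        best_params = best_candidate  # carry the winners into the next step
    return best_params

# Example: two hypothetical subspaces tuned one after another.
best = tune_sequentially(
    evaluate=lambda p: -(p["depth"] - 3) ** 2 - (p["shrinkage"] - 0.1) ** 2,
    subspaces=[{"depth": [1, 3, 5]}, {"shrinkage": [0.01, 0.1, 0.3]}],
    defaults={"depth": 1, "shrinkage": 0.01},
)
# best == {"depth": 3, "shrinkage": 0.1}
```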
- pipeline (Pipeline): Base pipeline used to derive all models fitted and scored during the hyperparameter optimization. It defines the data schema and any hyperparameters that are not optimized.
- population_table_training (DataFrame): The population table that pipelines will be trained on.
- population_table_validation (DataFrame): The population table that pipelines will be evaluated on.
- peripheral_tables (DataFrame, list or dict, optional): The peripheral tables used to provide additional information for the population tables.
- n_iter (int, optional): The number of iterations in the hyperparameter optimization.
- score (str, optional): The score to optimize. Must be from the scores in getml.pipeline.scores.
- num_threads (int, optional): The number of parallel threads to use. If set to 0, the number of threads will be inferred.
Example:

We assume that you have already set up your Pipeline and that you have defined a training set, a validation set, and the peripheral tables; a hypothetical sketch of such a setup follows.
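All names below are placeholders, and the exact constructor and join signatures can differ between getml versions, so treat this strictly as a sketch of what the prerequisites might look like.

```python
import getml

# Hypothetical setup sketch; all table, column, and class names are
# placeholders, and signatures may differ between getml versions.
population = getml.data.Placeholder("population")
peripheral = getml.data.Placeholder("peripheral")
population.join(peripheral, join_key="join_key")

base_pipeline = getml.pipeline.Pipeline(
    population=population,
    peripheral=[peripheral],
    feature_learners=[getml.feature_learning.Relboost()],
    predictors=[getml.predictors.XGBoostClassifier()],
)

# training_set, validation_set and peripheral_tables are getml DataFrames,
# e.g. loaded with getml.data.DataFrame.from_csv(...).
```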
```python
tuned_pipeline = getml.hyperopt.tune_feature_learners(
    pipeline=base_pipeline,
    population_table_training=training_set,
    population_table_validation=validation_set,
    peripheral_tables=peripheral_tables,
)
```
Returns:

A Pipeline containing tuned versions of the feature learners.
Raises:

TypeError: If any instance variable is of a wrong type.
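As a possible follow-up (again a sketch; method signatures vary between getml versions), the tuned pipeline can be evaluated like any other pipeline:

```python
# Hypothetical follow-up: evaluate the tuned pipeline on the validation
# set. If the returned pipeline is not yet fitted in your version of
# getml, call fit(...) first.
tuned_pipeline.fit(training_set, peripheral_tables)
print(tuned_pipeline.score(validation_set, peripheral_tables))
```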