# Copyright 2020 The SQLNet Company GmbH

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

"""
Feature learning based on Gradient Boosting.
"""

from .feature_learner import _FeatureLearner

from .loss_functions import SquareLoss

from .validation import (
    _validate_relboost_model_parameters
)

# --------------------------------------------------------------------


class RelMTModel(_FeatureLearner):
    """Feature learning based on relational linear model trees.

    :class:`~getml.feature_learning.RelMTModel` automates feature learning
    for relational data and time series. It is based on a generalization
    of linear model trees to relational data, hence the name. A linear
    model tree is a decision tree with linear models on its leaves.

    Args:
        allow_avg (bool, optional): Whether to allow an AVG aggregation.
            Particularly for time series problems, AVG aggregations are
            not necessary and you can save some time by taking them out.

        delta_t (float, optional): Frequency with which lag variables will
            be explored in a time series setting. When set to 0.0, there
            will be no lag variables. For more information, please refer
            to :ref:`data_model_time_series`. Range: [0, :math:`\\infty`]

        gamma (float, optional): During the training of RelMT, which is
            based on gradient tree boosting, this value serves as the
            minimum improvement in terms of the `loss_function` required
            for a split of the tree to be applied. Larger `gamma` will
            lead to fewer partitions of the tree and a more conservative
            algorithm. Range: [0, :math:`\\infty`]

        include_categorical (bool, optional): Whether you want to pass
            categorical columns from the population table to the
            `feature_selector` and `predictor`. Passing columns directly
            allows you to include handcrafted features as well as raw
            data. Note, however, that this does not guarantee their
            presence in the resulting features, because it is the task of
            the `feature_selector` to pick only the best performing ones.

        loss_function (:class:`~getml.feature_learning.loss_functions`, optional):
            Objective function used by the feature learning algorithm to
            optimize your features. For regression problems use
            :class:`~getml.feature_learning.loss_functions.SquareLoss` and
            for classification problems use
            :class:`~getml.feature_learning.loss_functions.CrossEntropyLoss`.

        max_depth (int, optional): Maximum depth of the trees generated
            during the gradient tree boosting. Deeper trees will result in
            more complex models and increase the risk of overfitting.
            Range: [0, :math:`\\infty`]

        min_num_samples (int, optional): Determines the minimum number of
            samples a subcondition should apply to in order for it to be
            considered. Higher values lead to less complex statements and
            less danger of overfitting. Range: [1, :math:`\\infty`]

        num_features (int, optional): Number of features generated by the
            feature learning algorithm. Range: [1, :math:`\\infty`]

        num_subfeatures (int, optional): The number of subfeatures you
            would like to extract in a subensemble (for snowflake data
            model only). See :ref:`data_model_snowflake_schema` for more
            information. Range: [1, :math:`\\infty`]

        num_threads (int, optional): Number of threads used by the feature
            learning algorithm. If set to zero or a negative value, the
            number of threads will be determined automatically by the
            getML engine. Range: [-:math:`\\infty`, :math:`\\infty`]

        reg_lambda (float, optional): L2 regularization on the weights in
            the gradient boosting routine. This is one of the most
            important hyperparameters in the
            :class:`~getml.feature_learning.RelMTModel`, as it allows for
            the most direct regularization. Larger values will make the
            resulting model more conservative. Range: [0, :math:`\\infty`]

        sampling_factor (float, optional): RelMT uses a bootstrapping
            procedure (sampling with replacement) to train each of the
            features. The sampling factor is proportional to the share of
            the samples randomly drawn from the population table every
            time RelMT generates a new feature. A lower sampling factor
            (but still greater than 0.0) will lead to less danger of
            overfitting, less complex statements and faster training. When
            set to 1.0, roughly 20,000 samples are drawn from the
            population table. If the population table contains fewer than
            20,000 samples, it will use standard bagging. When set to 0.0,
            there will be no sampling at all.
            Range: [0, :math:`\\infty`]

        seed (Union[int, None], optional): Seed used for the random number
            generator that underlies the sampling procedure, to make the
            calculation reproducible. Internally, a `seed` of None will be
            mapped to 5543. Range: [0, :math:`\\infty`]

        shrinkage (float, optional): Since RelMT works using a
            gradient-boosting-like algorithm, `shrinkage` (or learning
            rate) scales down the weights and thus the impact of each new
            tree. This gives more room for future ones to improve the
            overall performance of the model in this greedy algorithm. It
            must be between 0.0 and 1.0, with higher values leading to
            more danger of overfitting. Range: [0, 1]

        silent (bool, optional): Controls the logging during training.

        use_timestamps (bool, optional): Whether you want to ignore all
            elements in the peripheral tables where the time stamp is
            greater than the one in the corresponding elements of the
            population table. In other words, this determines whether you
            want to add the condition

            .. code-block:: sql

                t2.time_stamp <= t1.time_stamp

            at the very end of each feature. It is strongly recommended
            to enable this behavior.

    Raises:
        TypeError: If any of the input arguments or instance variables is
            of wrong type.
        KeyError: If an unsupported instance variable is encountered.
        ValueError: If any instance variable does not match its possible
            choices (string) or is out of the expected bounds (numerical).

    Example:
        .. code-block:: python

            population_placeholder = getml.data.Placeholder("population")
            order_placeholder = getml.data.Placeholder("order")
            trans_placeholder = getml.data.Placeholder("trans")

            population_placeholder.join(order_placeholder,
                                        join_key="account_id")

            population_placeholder.join(trans_placeholder,
                                        join_key="account_id",
                                        time_stamp="date")

            feature_selector = getml.predictors.XGBoostClassifier(
                reg_lambda=500
            )

            predictor = getml.predictors.XGBoostClassifier(
                reg_lambda=500
            )

            feature_learner = getml.feature_learning.RelMTModel(
                num_features=60,
                loss_function=getml.feature_learning.loss_functions.CrossEntropyLoss
            )

            pipe = getml.pipeline.Pipeline(
                tags=["relmt", "31 features"],
                population=population_placeholder,
                peripheral=[order_placeholder, trans_placeholder],
                feature_learners=feature_learner,
                feature_selectors=feature_selector,
                predictors=predictor,
                share_selected_features=0.5
            )

            pipe.check(
                population_table=population_train,
                peripheral_tables={"order": order, "trans": trans}
            )

            pipe = pipe.fit(
                population_table=population_train,
                peripheral_tables={"order": order, "trans": trans}
            )

            in_sample = pipe.score(
                population_table=population_train,
                peripheral_tables={"order": order, "trans": trans}
            )

            out_of_sample = pipe.score(
                population_table=population_test,
                peripheral_tables={"order": order, "trans": trans}
            )
    """

    # ------------------------------------------------------------

    def __init__(self,
                 allow_avg=True,
                 delta_t=0.0,
                 gamma=0.0,
                 loss_function=SquareLoss,
                 max_depth=2,
                 min_num_samples=1,
                 num_features=30,
                 num_subfeatures=30,
                 num_threads=0,
                 reg_lambda=0.0,
                 sampling_factor=1.0,
                 seed=None,
                 shrinkage=0.1,
                 silent=True,
                 use_timestamps=True):

        # ------------------------------------------------------------

        self.type = "RelMTModel"

        # ------------------------------------------------------------

        self.allow_avg = allow_avg
        self.delta_t = delta_t
        self.gamma = gamma
        self.loss_function = loss_function
        self.max_depth = max_depth
        self.min_num_samples = min_num_samples
        self.num_features = num_features
        self.num_subfeatures = num_subfeatures
        self.num_threads = num_threads
        self.reg_lambda = reg_lambda
        self.sampling_factor = sampling_factor
        self.seed = seed or 5543
        self.shrinkage = shrinkage
        self.silent = silent
        self.use_timestamps = use_timestamps

        # ------------------------------------------------------------

        RelMTModel._supported_params = list(self.__dict__.keys())

        # ------------------------------------------------------------

        self.validate()

    # ----------------------------------------------------------------

    def validate(self, params=None):
        """Checks both the types and the values of all instance variables
        and raises an exception if something is off.

        Args:
            params (dict, optional): A dictionary containing the
                parameters to validate. If none is passed, the object's
                own parameters will be validated.

        Raises:
            KeyError: If an unsupported instance variable is encountered.
            TypeError: If any instance variable is of wrong type.
            ValueError: If any instance variable does not match its
                possible choices (string) or is out of the expected
                bounds (numerical).
        """

        # ------------------------------------------------------------

        params = params or self.__dict__

        if not isinstance(params, dict):
            raise ValueError("params must be None or a dictionary!")

        # ------------------------------------------------------------

        for kkey in params:
            if kkey not in RelMTModel._supported_params:
                raise KeyError(
                    """Instance variable ["""
                    + kkey
                    + """] is not supported in RelMTModel.""")

        # ------------------------------------------------------------

        if not isinstance(params["silent"], bool):
            raise TypeError("'silent' must be of type bool")

        if params["type"] != "RelMTModel":
            raise ValueError("'type' must be 'RelMTModel'")

        # ------------------------------------------------------------

        _validate_relboost_model_parameters(**params)
# ------------------------------------------------------------
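
# Illustrative usage sketch, not part of the library itself: it only
# exercises behavior that is visible in this module, namely the default
# seed mapping in __init__ and the unsupported-parameter check in
# validate(). It assumes the getml package is installed so that the
# relative imports above resolve; run it manually if desired, e.g. via
# `python -m getml.feature_learning.relmt_model`.

if __name__ == "__main__":

    # seed=None is mapped to 5543 internally (see __init__ above).
    feature_learner = RelMTModel(num_features=60, shrinkage=0.05)
    assert feature_learner.seed == 5543

    # validate() rejects parameters that RelMTModel does not know about.
    try:
        feature_learner.validate(
            {**feature_learner.__dict__, "not_a_param": 1})
    except KeyError as err:
        print("Caught expected KeyError:", err)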