FastPropTimeSeries

class getml.feature_learning.FastPropTimeSeries(aggregation: List[str] = <factory>, loss_function: str = 'SquareLoss', min_df: int = 30, n_most_frequent: int = 0, num_features: int = 200, num_threads: int = 0, sampling_factor: float = 1.0, silent: bool = True, vocab_size: int = 500, horizon: float = 0.0, memory: float = 0.0, self_join_keys: List[str] = <factory>, ts_name: str = '', allow_lagged_targets: bool = False)[source]

Generates simple features based on propositionalization.

FastPropTimeSeries generates simple and easily interpretable features for relational data and time series. It is based on a propositionalization approach and has been optimized for speed and memory efficiency. FastPropTimeSeries generates a large number of features and selects the most relevant ones based on the pairwise correlation with the target(s).

It is recommended to combine FastPropTimeSeries with the Mapping and Seasonal preprocessors, which can drastically improve predictive accuracy.
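
A minimal sketch of that combination, assuming a hypothetical time stamp column "date" and using the Mapping and Seasonal preprocessors from getml.preprocessors (the full example below covers the remaining setup):

import getml

# Hypothetical setup: pass the preprocessors to the same
# pipeline that holds the feature learner.
mapping = getml.preprocessors.Mapping()
seasonal = getml.preprocessors.Seasonal()

feature_learner = getml.feature_learning.FastPropTimeSeries(ts_name="date")

pipe = getml.pipeline.Pipeline(
    preprocessors=[mapping, seasonal],
    feature_learners=[feature_learner],
)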

Args:

horizon (float, optional):

The period of time you want to look ahead to generate the predictions.

memory (float, optional):

The period of time the algorithm looks back before it “forgets” the data. If you set memory to 0.0, there is no limit.

self_join_keys (List[str], optional):

A list of the join keys to use for the self join. If none are passed, then the self join will take place on the entire population table.

ts_name (str, optional):

The name of the time stamp column to be used. If none is passed, then the row ID will be used.

allow_lagged_targets (bool, optional):

In some time series problems, it is allowed to aggregate over target variables from the past; in others, it is not. If allow_lagged_targets is set to True, you must pass a horizon greater than zero, otherwise you would have a data leak (an exception is thrown to prevent this). See the sketch after this parameter list.

aggregation (List[aggregations], optional):

Mathematical operations used by the automated feature learning algorithm to create new features.

Must be from aggregations.

loss_function (loss_functions, optional):

Objective function used by the feature learning algorithm to optimize your features. For regression problems use SquareLoss and for classification problems use CrossEntropyLoss.

min_df (int, optional):

Only relevant for columns with role text. The minimum number of fields (i.e. rows) of a text column in which a given word must appear for it to be included in the bag of words. Range: [1, \(\infty\)]

num_features (int, optional):

Number of features generated by the feature learning algorithm. Range: [1, \(\infty\)]

n_most_frequent (int, optional):

FastPropTimeSeries can find the n most frequent categories in a categorical column and derive features from them. This parameter determines how many categories are used. Range: [0, \(\infty\)]

num_threads (int, optional):

Number of threads used by the feature learning algorithm. If set to zero or a negative value, the number of threads will be determined automatically by the getML engine. Range: [0, \(\infty\)]

sampling_factor (float, optional):

FastProp uses a bootstrapping procedure (sampling with replacement) to train each of the features. The sampling factor is proportional to the share of the samples randomly drawn from the population table every time FastProp generates a new feature. A lower sampling factor (but still greater than 0.0) will lead to less danger of overfitting, less complex statements and faster training. When set to 1.0, roughly 2,000 samples are drawn from the population table. If the population table contains fewer than 2,000 samples, standard bagging is used instead. When set to 0.0, there will be no sampling at all. Range: [0, \(\infty\)]

silent (bool, optional):

Controls the logging during training.

vocab_size (int, optional):

Determines the maximum number of words that are extracted in total from getml.data.roles.text columns. This can be interpreted as the maximum size of the bag of words. Range: [0, \(\infty\)]
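
As noted under allow_lagged_targets, aggregating over past targets requires a strictly positive horizon. A minimal sketch, assuming a one-hour forecast horizon and a hypothetical time stamp column "date":

# Hypothetical setup: lagged targets require horizon > 0,
# otherwise the engine throws an exception to prevent a data leak.
feature_learner = getml.feature_learning.FastPropTimeSeries(
    ts_name="date",
    horizon=getml.data.time.hours(1.0),
    memory=getml.data.time.days(1.0),
    allow_lagged_targets=True,
)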

Example:

import getml

# Our forecast horizon is 0.
# We do not predict the future; instead, we infer
# the present state from current and past sensor data.
horizon = 0.0

# We do not allow the time series features
# to use target values from the past.
# (Otherwise, the horizon would need to
# be greater than 0.0.)
allow_lagged_targets = False

# We want our time series features to only use
# data from the last 15 minutes.
memory = getml.data.time.minutes(15)

feature_learner = getml.feature_learning.FastPropTimeSeries(
    ts_name="date",
    horizon=horizon,
    memory=memory,
    allow_lagged_targets=allow_lagged_targets,
    loss_function=getml.feature_learning.loss_functions.CrossEntropyLoss
)

predictor = getml.predictors.XGBoostClassifier(reg_lambda=500)

pipe = getml.pipeline.Pipeline(
    tags=["memory=15", "fastprop"],
    feature_learners=[feature_learner],
    predictors=[predictor]
)

# data_train and data_test are assumed to be prepared
# getml data containers.
pipe.check(data_train)

pipe = pipe.fit(data_train)

predictions = pipe.predict(data_test)

scores = pipe.score(data_test)

Methods

validate([params])

Checks both the types and the values of all instance variables and raises an exception if something is off.
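
A minimal sketch: validate() can be called before fitting to catch misconfigurations early.

feature_learner = getml.feature_learning.FastPropTimeSeries(
    loss_function="CrossEntropyLoss"
)

# Raises an exception if any instance variable has the wrong
# type or an invalid value.
feature_learner.validate()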

Attributes

agg_sets

allow_lagged_targets

horizon

loss_function

memory

min_df

n_most_frequent

num_features

num_threads

sampling_factor

silent

ts_name

type

vocab_size