to_sql

Features.to_sql(targets: bool = True, subfeatures: bool = True, dialect: Literal['bigquery', 'mysql', 'postgres', 'spark sql', 'sqlite3', 'tsql'] = 'sqlite3', schema: Optional[str] = None, nchar_categorical: int = 128, nchar_join_key: int = 128, nchar_text: int = 4096) → getml.pipeline.sql_code.SQLCode

Returns SQL statements representing the features.

Args:
targets (bool):

Whether to include the target columns in the main table.

subfeatures (bool):

Whether to include the code for the subfeatures of a snowflake schema.

dialect (str):

The SQL dialect to use. Must be one of the dialects listed in the signature above; see the Examples section below for a usage sketch.

schema (str, optional):

The schema in which to wrap all generated tables and indices. Pass None for no schema. Not applicable to all dialects. For the BigQuery and MySQL dialects, the schema is identical to the database ID.

nchar_categorical (int):

The maximum number of characters used in the VARCHAR for categorical columns. Not applicable to all dialects.

nchar_join_key (int):

The maximum number of characters used in the VARCHAR for join keys. Not applicable to all dialects.

nchar_text (int):

The maximum number of characters used in the VARCHAR for text columns. Not applicable to all dialects.

Examples:

my_pipeline.features.to_sql()
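
A slightly fuller sketch transpiling the features for a different target database. The pipeline name my_pipeline is hypothetical, and the dialect constants are assumed to live in getml.pipeline.dialect:

import getml

# Transpile the trained features to MySQL SQL. For the MySQL dialect,
# the schema is identical to the database ID.
sql_code = my_pipeline.features.to_sql(
    dialect=getml.pipeline.dialect.mysql,
    schema="mydb",
    nchar_categorical=256,  # widen the VARCHAR used for categorical columns
)

# SQLCode objects can be printed to inspect the generated statements.
print(sql_code)
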
Returns:
SQLCode

Object representing the features.

Note:

Only fitted pipelines (fit()) can hold trained features that can be returned as SQL statements. The default dialect is based on the SQLite3 standard.
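
A minimal sketch of that workflow, assuming a hypothetical pipeline my_pipeline and data container container:

# The pipeline must be fitted before its features can be returned as SQL.
my_pipeline.fit(container.train)

# With no arguments, to_sql() uses the SQLite3-based default dialect.
print(my_pipeline.features.to_sql())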