- getml.database.connect_hana(user: str, password: str, host: str, port: int = 39017, default_schema: str = 'public', ping_interval: int = 0, time_formats: Optional[List[str]] = None, conn_id: str = 'default')
Creates a new HANA database connection.
- user (str):
User name with which to log into the HANA database.
- password (str):
Password with which to log into the HANA database.
- host (str):
Host of the HANA database.
- port (int, optional):
Port of the database.
- default_schema (str, optional):
The schema within the database you want to use unless another schema is explicitly set.
- ping_interval (int, optional):
The interval at which you want to ping the database, in seconds. Set to 0 for no pings at all.
- time_formats (List[str], optional):
The list of formats tried when parsing time stamps.
The formats are allowed to contain the following special characters:
%w - abbreviated weekday (Mon, Tue, …)
%W - full weekday (Monday, Tuesday, …)
%b - abbreviated month (Jan, Feb, …)
%B - full month (January, February, …)
%d - zero-padded day of month (01 .. 31)
%e - day of month (1 .. 31)
%f - space-padded day of month ( 1 .. 31)
%m - zero-padded month (01 .. 12)
%n - month (1 .. 12)
%o - space-padded month ( 1 .. 12)
%y - year without century (70)
%Y - year with century (1970)
%H - hour (00 .. 23)
%h - hour (00 .. 12)
%a - am/pm
%A - AM/PM
%M - minute (00 .. 59)
%S - second (00 .. 59)
%s - seconds and microseconds (equivalent to %S.%F)
%i - millisecond (000 .. 999)
%c - centisecond (0 .. 9)
%F - fractional seconds/microseconds (000000 - 999999)
%z - time zone differential in ISO 8601 format (Z or +NN.NN)
%Z - time zone differential in RFC format (GMT or +NNNN)
%% - percent sign
- conn_id (str, optional):
The name to be used to reference the connection. If you do not pass anything, this will create a new default connection.
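A minimal connection sketch might look like this. The host, credentials, and schema below are placeholder values, and actually running the call requires a running getml Engine and a reachable HANA instance:

```python
import getml

# Hedged sketch: all connection details are placeholders, not real values.
conn = getml.database.connect_hana(
    user="my_user",
    password="my_password",
    host="hana.example.com",
    port=39017,                 # default HANA SQL port
    default_schema="MY_SCHEMA",
    ping_interval=60,           # ping once a minute to keep the connection alive
    time_formats=["%Y-%m-%d %H:%M:%S"],
    conn_id="hana",             # name used to reference this connection later on
)
```

Because `conn_id` is set explicitly, the connection can later be referenced by the name `"hana"` instead of overwriting the default connection.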
By selecting an existing table of your database in the from_db() function, you can create a new DataFrame containing all of its data. Alternatively, you can use the read_query() method to replace the content of the current DataFrame instance or to append further rows, based on either a table or a specific query.
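The following sketch illustrates this workflow. The connection details, the table name `SALES`, and the exact keyword arguments (`append`, `conn`) are assumptions for illustration; running it requires a getml Engine and a HANA database containing such a table:

```python
import getml

# Placeholder connection details.
conn = getml.database.connect_hana(
    user="my_user",
    password="my_password",
    host="hana.example.com",
    conn_id="hana",
)

# Create a new DataFrame from an existing table of the database.
df = getml.DataFrame.from_db(table_name="SALES", name="sales", conn=conn)

# Replace the content of the DataFrame based on a specific query ...
df.read_query("SELECT * FROM SALES WHERE YEAR = 2024", conn=conn)

# ... or append further rows instead of replacing.
df.read_query("SELECT * FROM SALES WHERE YEAR = 2023", append=True, conn=conn)
```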
You can also write your results back into the HANA database. By passing the name of the destination table to getml.Pipeline.transform(), the features generated from your raw data will be written back. Passing it to getml.Pipeline.predict() instead makes predictions of the target variables on new, unseen data and stores the results in the corresponding table.
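As a rough sketch of the write-back step: assuming a pipeline `pipe` that has already been fitted and a data container `container` (both hypothetical, as are the destination table names), the features and predictions could be stored like this:

```python
# Hedged sketch: `pipe` is an already-fitted getml.Pipeline and
# `container.full` a data subset prepared beforehand (not shown here).

# Write the features generated from the raw data into a database table ...
pipe.transform(container.full, table_name="SALES_FEATURES")

# ... or store predictions for new, unseen data in a table instead.
pipe.predict(container.full, table_name="SALES_PREDICTIONS")
```

Whether the results land in the database or are returned as an array is controlled solely by whether `table_name` is passed.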