STATSML 600: Data Distiller Advanced Statistics & Machine Learning Models
Discover advanced statistics and machine learning functions to build predictive models
Use case tutorials are here:
Data Distiller users need a convenient way to generate data insights to predict the best strategies for targeting users across various use cases. They want the ability to predict a user's likelihood of buying a specific product, estimate the quantity they may purchase, and identify which products are most likely to be bought. Currently, there is no option to leverage machine learning algorithms directly through SQL to produce predictive insights from the data.
With the introduction of statistical functions such as CREATE MODEL, MODEL_EVALUATE, and MODEL_PREDICT, Data Distiller users will gain the capability to create predictive insights from data stored in the lake. This three-step querying process enables them to easily generate actionable insights from their data.
Data Distiller's statistics and ML capabilities can play a crucial role in augmenting full-scale ML platforms like Databricks, Google Cloud AI Platform, Azure Machine Learning and Amazon SageMaker, providing valuable support for the end-to-end machine learning workflow. Here's how these features could be leveraged:
Quick Prototyping: The ability to use SQL-based ML models and transformations allows data scientists and engineers to quickly prototype models and test different features without setting up complex ML pipelines. This rapid iteration is particularly valuable in the early stages of feature engineering and model development.
Feature Validation: By experimenting with various feature transformations and basic models within Data Distiller, users can validate the quality and impact of different features. This ensures that only the most relevant features are sent for training in full-scale ML platforms, thereby optimizing model performance.
Efficient Feature Processing: Data Distiller's built-in transformers (e.g., vector assemblers, scalers, and encoders) can be used for feature engineering and data preprocessing steps. This enables seamless integration with platforms by preparing the data in a format that is ready for advanced model training.
Automated Feature Selection: With basic statistical and machine learning capabilities, Data Distiller can help automate feature selection by running simple models to identify the most predictive features before moving to a full-scale ML environment.
Cost-Effective Experimentation: By using Data Distiller to conduct initial model experiments and transformations, teams can avoid the high costs associated with running large-scale ML jobs on platforms. This is particularly useful when working with large datasets or conducting frequent iterations.
Integrated Workflow: Once features and models are validated in Data Distiller, the results can be easily transferred to the machine learning platform for full-scale training. This integrated approach streamlines the development process, reducing the time needed for data preparation and experimentation.
Feature Prototyping: Data Distiller can serve as a testing ground for new features and transformations. For example, users can build basic predictive models or clustering algorithms to understand the potential of different features before moving to more complex models on Databricks or SageMaker.
Model Evaluation and Validation: Basic model evaluation (e.g., classification accuracy, regression metrics) within Data Distiller can help identify promising feature sets. These insights can guide further tuning and training in full-scale ML environments, reducing the need for costly experiments.
Modular Approach: Design Data Distiller processes to produce well-defined outputs that can be easily integrated into downstream ML workflows. For instance, transformed features and initial model insights can be exported as data artifacts for further training.
Continuous Learning Loop: Use the insights from Data Distiller to inform feature engineering strategies. This iterative loop ensures that the models trained on full-scale platforms are built on well-curated and optimized data.
Data Distiller supports various advanced statistics and machine learning operations through SQL commands, enabling users to:
Create models
Evaluate models
Make predictions
This workflow consists of the following steps:
Source Data: The process begins with the available source data, which serves as the input for training the machine learning model.
CREATE MODEL Using Training Data: A predictive model is created using the training data. This step involves selecting the appropriate machine learning algorithm and training it to learn patterns from the data.
MODEL_EVALUATE to Check the Accuracy of the Model: The trained model is then evaluated to measure its accuracy and ensure it performs well on unseen data. This step helps validate the model's effectiveness.
MODEL_PREDICT to Make Predictions on New Data: Once the model's accuracy is verified, it is used to make predictions on new, unseen data, generating predictive insights.
Output Prediction Data: Finally, the predictions are outputted, providing actionable insights based on the processed data.
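As an illustration, the three steps map to SQL roughly as sketched below. The model name, dataset names, and columns are hypothetical, and the second argument to model_evaluate and model_predict is assumed to be the model version number; check the reference documentation for the exact signatures in your environment.

```sql
-- Step 1: train a model on the source data
CREATE MODEL purchase_propensity
OPTIONS (MODEL_TYPE = 'logistic_reg', LABEL = 'purchased')
AS SELECT age, income, visit_count, purchased
   FROM purchase_training_data;

-- Step 2: evaluate the trained model against held-out data
SELECT * FROM model_evaluate(purchase_propensity, 1,
  SELECT age, income, visit_count, purchased FROM purchase_test_data);

-- Step 3: score new, unseen data
SELECT * FROM model_predict(purchase_propensity, 1,
  SELECT age, income, visit_count FROM purchase_new_data);
```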
Regression algorithms:
Linear Regression: Fits a linear relationship between features and a target variable.
Decision Tree Regression: Uses a tree structure to model and predict continuous values.
Random Forest Regression: An ensemble of decision trees that predicts the average output.
Gradient Boosted Tree Regression: Uses an ensemble of trees to minimize prediction error iteratively.
Generalized Linear Regression: Extends linear regression to model non-normal target distributions.
Isotonic Regression: Fits a non-decreasing or non-increasing sequence to the data.
Survival Regression: Models time-to-event data based on the Weibull distribution.
Factorization Machines Regression: Models interactions between features, making it suitable for sparse datasets and high-dimensional data.
Classification algorithms:
Logistic Regression: Predicts probabilities for binary or multiclass classification problems.
Decision Tree Classifier: Uses a tree structure to classify data into distinct categories.
Random Forest Classifier: An ensemble of decision trees that classifies data based on majority voting.
Naive Bayes Classifier: Uses Bayes' theorem with strong independence assumptions between features.
Factorization Machines Classifier: Models interactions between features for classification, making it suitable for sparse and high-dimensional data.
Linear Support Vector Classifier (LinearSVC): Constructs a hyperplane for binary classification tasks, maximizing the margin between classes.
Multilayer Perceptron Classifier: A neural network classifier with multiple layers for mapping inputs to outputs using an activation function.
Clustering algorithms:
K-Means: Partitions data into k clusters based on distance to cluster centroids.
Bisecting K-Means: Uses a hierarchical divisive approach for clustering.
Gaussian Mixture: Models data as a mixture of multiple Gaussian distributions.
Latent Dirichlet Allocation (LDA): Identifies topics in a collection of text documents.
Use the CREATE MODEL command to define a new machine learning model.
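A minimal sketch, assuming a hypothetical model name, training dataset, and a LABEL option that names the target column:

```sql
CREATE MODEL churn_model
OPTIONS (
  MODEL_TYPE = 'logistic_reg',  -- algorithm to train
  MAX_ITER = 100,               -- training iterations
  REG_PARAM = 0.1,              -- regularization strength
  LABEL = 'churned'             -- target column
)
AS SELECT age, tenure_months, monthly_spend, churned
   FROM churn_training_data;
```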
In this example:
MODEL_TYPE specifies the algorithm.
MAX_ITER sets the maximum number of training iterations.
REG_PARAM is the regularization parameter.
Note that the syntax does not support reading from a TEMP table, and it does not allow braces around the source query.
The TRANSFORM clause allows you to preprocess features before training.
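The sketch below combines several transformers in one statement; the table name, column names, and thresholds are illustrative, and the exact transformer signatures may differ slightly in your release:

```sql
CREATE MODEL conversion_model
TRANSFORM (
  binarizer(total_spend, 100.0) AS high_spender,    -- binarize a numeric feature
  string_indexer(membership_tier) AS tier_index,    -- index a categorical feature
  vector_assembler(array(high_spender, tier_index, visit_count)) AS features  -- assemble a feature vector
)
OPTIONS (MODEL_TYPE = 'logistic_reg', LABEL = 'converted')
AS SELECT total_spend, membership_tier, visit_count, converted
   FROM conversion_training_data;
```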
This example demonstrates:
Binarizing a numeric feature.
Indexing a categorical feature.
Assembling multiple features into a vector.
Feature transformation is the process of extracting meaningful features from raw data to enhance the accuracy of downstream statistical models. The Data Distiller feature engineering SQL extension provides a comprehensive suite of techniques that streamline and automate data preprocessing. These functions allow for seamless, efficient data preparation and enable easy experimentation with various feature engineering methods. Designed for distributed computing, the SQL extension supports feature engineering on large datasets in a parallel and scalable manner, significantly reducing the time needed for preprocessing.
Feature transformation is broadly used for the following purposes:
Extraction: Extracts important information from data columns, helping models to identify key signals. For example, in textual data, long sentences may contain irrelevant words that need to be removed to improve model performance.
Transformation: Converts raw data into a format that machine learning models can consume. Since models understand numbers but not text, transformers are used to convert non-numerical data into numerical features.
Define custom preprocessing steps using the TRANSFORM clause. If the TRANSFORM clause is omitted, Data Distiller performs basic preprocessing.
Several transformers can be used for feature engineering:
Numeric Imputer
Description: Fills missing numeric values using a specified strategy such as "mean," "median," or "mode."
Example:
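A possible invocation inside a TRANSFORM clause; the column name and the 'mean' strategy argument are illustrative:

```sql
TRANSFORM (numeric_imputer(age, 'mean') AS age_imputed)
```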
String Imputer
Description: Replaces missing string values with a specified string.
Example:
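For instance, filling missing values in a hypothetical city column with a placeholder string:

```sql
TRANSFORM (string_imputer(city, 'unknown') AS city_imputed)
```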
Boolean Imputer
Description: Completes missing values in a boolean column using a specified boolean value.
Example:
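A sketch with an assumed is_subscribed column:

```sql
TRANSFORM (boolean_imputer(is_subscribed, false) AS is_subscribed_imputed)
```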
Vector Assembler
Description: Combines multiple columns into a single vector column. Useful for creating feature vectors from multiple features.
Example:
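For example, assembling three hypothetical numeric columns into one feature vector:

```sql
TRANSFORM (vector_assembler(array(age, income, visit_count)) AS features)
```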
Binarizer
Description: Converts a numeric column to a binary value (0 or 1) based on a specified threshold.
Example:
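For example, with an illustrative threshold of 3.5 on a rating column:

```sql
TRANSFORM (binarizer(rating, 3.5) AS high_rating)
```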
Bucketizer
Description: Splits a continuous numeric column into discrete bins based on specified thresholds.
Example:
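A sketch that bins an income column using assumed split points:

```sql
TRANSFORM (bucketizer(income, array(0.0, 25000.0, 50000.0, 100000.0)) AS income_bucket)
```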
String Indexer
Description: Converts a column of strings into a column of indexed numerical values, typically used for categorical features.
Example:
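For example, indexing a hypothetical country column:

```sql
TRANSFORM (string_indexer(country) AS country_index)
```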
One-Hot Encoder
Description: Converts categorical features represented as indices into a one-hot encoded vector.
Example:
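Typically chained after a string indexer; the columns here are illustrative:

```sql
TRANSFORM (
  string_indexer(country) AS country_index,
  one_hot_encoder(country_index) AS country_vec
)
```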
Standard Scaler
Description: Standardizes a numeric column by removing the mean and scaling to unit variance.
Example:
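A sketch that scales an assembled feature vector, assuming the scaler operates on vector input:

```sql
TRANSFORM (
  vector_assembler(array(age, income)) AS features,
  standard_scaler(features) AS scaled_features
)
```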
Min-Max Scaler
Description: Scales a numeric column to a specified range, typically [0, 1].
Example:
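For example, rescaling an assembled vector to the default [0, 1] range:

```sql
TRANSFORM (
  vector_assembler(array(age, income)) AS features,
  min_max_scaler(features) AS scaled_features
)
```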
Max Abs Scaler
Description: Scales a numeric column by dividing each value by the maximum absolute value in that column.
Example:
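A possible invocation on an assembled vector of hypothetical engagement counts:

```sql
TRANSFORM (
  vector_assembler(array(clicks, page_views)) AS features,
  max_abs_scaler(features) AS scaled_features
)
```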
Normalizer
Description: Normalizes a vector to have unit norm, typically used for scaling individual samples.
Example:
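For example, normalizing each sample's feature vector to unit L2 norm; the 2.0 argument is the assumed p-norm parameter:

```sql
TRANSFORM (
  vector_assembler(array(clicks, page_views)) AS features,
  normalizer(features, 2.0) AS normalized_features
)
```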
Polynomial Expansion
Description: Expands a vector of features into a polynomial feature space.
Example:
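A sketch that expands a two-feature vector to degree 2:

```sql
TRANSFORM (
  vector_assembler(array(age, income)) AS features,
  polynomial_expansion(features, 2) AS poly_features
)
```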
Chi-Square Selector
Description: Selects the top features based on the Chi-Square test of independence.
Example:
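A sketch assuming the selector takes the feature vector, the label column, and the number of features to keep; the exact argument order may differ:

```sql
TRANSFORM (
  vector_assembler(array(f1, f2, f3, f4)) AS features,
  chi_sq_selector(features, label, 2) AS selected_features
)
```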
Principal Component Analysis (PCA)
Description: Reduces the dimensionality of the data by projecting it onto a lower-dimensional subspace.
Example:
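For example, projecting a five-feature vector down to three principal components:

```sql
TRANSFORM (
  vector_assembler(array(f1, f2, f3, f4, f5)) AS features,
  pca(features, 3) AS pca_features
)
```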
Feature Hasher
Description: Converts categorical features into numerical features using the hashing trick, resulting in a fixed-length feature vector.
Example:
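A possible invocation over a set of hypothetical categorical columns:

```sql
TRANSFORM (feature_hasher(array(country, device_type, browser)) AS hashed_features)
```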
Stop Words Remover
Description: Removes common stop words from a column of text data.
Example:
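Usually applied after tokenization; the review_text column is illustrative:

```sql
TRANSFORM (
  tokenizer(review_text) AS words,
  stop_words_remover(words) AS filtered_words
)
```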
N-Gram
Description: Converts a column of text data into a sequence of n-grams.
Example:
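For example, producing bigrams from tokenized text:

```sql
TRANSFORM (
  tokenizer(review_text) AS words,
  ngram(words, 2) AS bigrams
)
```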
Tokenizer
Description: Splits a string column into a list of words.
Example:
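A sketch over a hypothetical free-text column:

```sql
TRANSFORM (tokenizer(review_text) AS words)
```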
TF-IDF
Description: TF-IDF (term frequency-inverse document frequency) is a statistic that reflects how important a word is to a document within a collection or corpus. It is widely used in text mining and natural language processing to transform text data into numerical features. Given a term t, a document d, and a corpus D:
Term Frequency, TF(t, d), is the number of times term t appears in document d.
Document Frequency, DF(t, D), is the number of documents in the corpus D that include the term t.
Using only term frequency can overemphasize terms that appear frequently but carry little meaningful information (e.g., "a," "the," "of"). TF-IDF addresses this by weighting terms inversely proportional to their frequency across the corpus, thus highlighting terms that are more informative for a particular document.
Example:
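In a common formulation, the inverse document frequency is IDF(t, D) = log((|D| + 1) / (DF(t, D) + 1)), and the final weight is TFIDF(t, d, D) = TF(t, d) * IDF(t, D). A possible invocation, assuming a tf_idf transformer applied to tokenized text (the name and arguments may vary by release):

```sql
TRANSFORM (
  tokenizer(review_text) AS words,
  tf_idf(words) AS tfidf_features
)
```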
TF-IDF helps in converting a collection of text documents into a matrix of numerical features that can be used as input for machine learning models. It is particularly useful for feature extraction in text classification tasks, sentiment analysis, and information retrieval.
Word2Vec
Description: Word2Vec is an estimator that takes sequences of words representing documents and trains a Word2VecModel. The model maps each word to a unique fixed-size vector in a continuous vector space. The Word2VecModel then transforms each document into a vector by averaging the vectors of all the words in the document. This technique is widely used in natural language processing (NLP) tasks to capture the semantic meaning of words and represent them in a numerical format suitable for machine learning models.
Example:
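A sketch that tokenizes text and trains 10-dimensional word vectors; the column name is hypothetical, and additional arguments (such as a minimum word count) may also be supported:

```sql
TRANSFORM (
  tokenizer(review_text) AS words,
  word2vec(words, 10) AS doc_vector
)
```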
In this example:
The tokenizer transformer splits the input text into individual words.
The word2vec transformer generates a fixed-size vector (with a specified size of 10) for each word in the sequence and computes the average vector for all words in the document.
Word2Vec is commonly used to convert text data into numerical features, allowing machine learning algorithms to process textual information while capturing semantic relationships between words.
CountVectorizer
Description: The CountVectorizer is used to convert a collection of text documents into vectors of token counts. It generates sparse representations for the documents based on the vocabulary, allowing further processing by algorithms such as Latent Dirichlet Allocation (LDA) and other text analysis techniques. The output is a sparse vector where the value of each element represents the count of a term in the document.
Input Data Type: array[string]
Output Data Type: Sparse vector
Parameters:
VOCAB_SIZE: The maximum size of the vocabulary. The CountVectorizer builds a vocabulary that considers only the top VOCAB_SIZE terms, ordered by term frequency across the corpus.
MIN_DOC_FREQ: Specifies the minimum number of different documents a term must appear in to be included in the vocabulary. If set as an integer, it indicates the number of documents; if a double in [0, 1), it indicates a fraction of documents.
MAX_DOC_FREQ: Specifies the maximum number of different documents a term may appear in to be included in the vocabulary. Terms appearing in more documents than the threshold are ignored. If set as an integer, it indicates the maximum number of documents; if a double in [0, 1), it indicates the maximum fraction of documents.
MIN_TERM_FREQ: Filters out rare words in a document. Terms with a frequency lower than the threshold in a document are ignored. If an integer, it specifies the count; if a double in [0, 1), it specifies a fraction.
Example:
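A possible invocation over tokenized text. Whether parameters such as VOCAB_SIZE are passed positionally or as named options may vary, so the second argument here (an assumed vocabulary size of 1000) is illustrative:

```sql
TRANSFORM (
  tokenizer(review_text) AS words,
  count_vectorizer(words, 1000) AS term_counts
)
```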
Set hyper-parameters using the OPTIONS clause to optimize model performance.
Example:
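For instance, reusing the parameters named earlier on this page; other algorithms expose their own parameter sets, and the model name and dataset are hypothetical:

```sql
CREATE MODEL propensity_tuned
OPTIONS (
  MODEL_TYPE = 'logistic_reg',
  MAX_ITER = 200,
  REG_PARAM = 0.01,
  LABEL = 'purchased'
)
AS SELECT * FROM purchase_training_data;
```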
Use vector assemblers to combine related features.
Perform feature scaling (e.g., normalization) where applicable.
Choose models based on the problem type (e.g., classification vs. regression).
Detailed hyper-parameter lists for the regression, classification, and clustering algorithms are available in the reference documentation.