STATSML 600: Data Distiller Advanced Statistics & Machine Learning Models
Discover advanced statistics and machine learning functions to build predictive models
Use case tutorials are here:
Data Distiller users need a convenient way to generate data insights to predict the best strategies for targeting users across various use cases. They want the ability to predict a user's likelihood of buying a specific product, estimate the quantity they may purchase, and identify which products are most likely to be bought. Currently, there is no option to leverage machine learning algorithms directly through SQL to produce predictive insights from the data.
With the introduction of statistical functions such as CREATE MODEL, MODEL_EVALUATE, and MODEL_PREDICT, Data Distiller users gain the capability to create predictive insights from data stored in the lake. This three-step querying process enables them to easily generate actionable insights from their data.
Data Distiller's statistics and ML capabilities can play a crucial role in augmenting full-scale ML platforms like Databricks, Google Cloud AI Platform, Azure Machine Learning and Amazon SageMaker, providing valuable support for the end-to-end machine learning workflow. Here's how these features could be leveraged:
Quick Prototyping: The ability to use SQL-based ML models and transformations allows data scientists and engineers to quickly prototype models and test different features without setting up complex ML pipelines. This rapid iteration is particularly valuable in the early stages of feature engineering and model development.
Feature Validation: By experimenting with various feature transformations and basic models within Data Distiller, users can validate the quality and impact of different features. This ensures that only the most relevant features are sent for training in full-scale ML platforms, thereby optimizing model performance.
Efficient Feature Processing: Data Distiller's built-in transformers (e.g., vector assemblers, scalers, and encoders) can be used for feature engineering and data preprocessing steps. This enables seamless integration with platforms by preparing the data in a format that is ready for advanced model training.
Automated Feature Selection: With basic statistical and machine learning capabilities, Data Distiller can help automate feature selection by running simple models to identify the most predictive features before moving to a full-scale ML environment.
Cost-Effective Experimentation: By using Data Distiller to conduct initial model experiments and transformations, teams can avoid the high costs associated with running large-scale ML jobs on platforms. This is particularly useful when working with large datasets or conducting frequent iterations.
Integrated Workflow: Once features and models are validated in Data Distiller, the results can be easily transferred to the machine learning platform for full-scale training. This integrated approach streamlines the development process, reducing the time needed for data preparation and experimentation.
Feature Prototyping: Data Distiller can serve as a testing ground for new features and transformations. For example, users can build basic predictive models or clustering algorithms to understand the potential of different features before moving to more complex models on Databricks or SageMaker.
Model Evaluation and Validation: Basic model evaluation (e.g., classification accuracy, regression metrics) within Data Distiller can help identify promising feature sets. These insights can guide further tuning and training in full-scale ML environments, reducing the need for costly experiments.
Modular Approach: Design Data Distiller processes to produce well-defined outputs that can be easily integrated into downstream ML workflows. For instance, transformed features and initial model insights can be exported as data artifacts for further training.
Continuous Learning Loop: Use the insights from Data Distiller to inform feature engineering strategies. This iterative loop ensures that the models trained on full-scale platforms are built on well-curated and optimized data.
Data Distiller supports various advanced statistics and machine learning operations through SQL commands, enabling users to:
Create models
Evaluate models
Make predictions
The workflow consists of the following steps (an end-to-end sketch follows the list):
Source Data: The process begins with the available source data, which serves as the input for training the machine learning model.
CREATE MODEL Using Training Data: A predictive model is created using the training data. This step involves selecting the appropriate machine learning algorithm and training it to learn patterns from the data.
MODEL_EVALUATE to Check the Accuracy of the Model: The trained model is then evaluated to measure its accuracy and ensure it performs well on unseen data. This step helps validate the model's effectiveness.
MODEL_PREDICT to Make Predictions on New Data: Once the model's accuracy is verified, it is used to make predictions on new, unseen data, generating predictive insights.
Output Prediction Data: Finally, the predictions are outputted, providing actionable insights based on the processed data.
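Putting the three commands together, here is a minimal end-to-end sketch. It is illustrative only: the model name (propensity_model), dataset names (training_data, holdout_data, new_profiles), the column names, and the LABEL option used to identify the target column are assumptions, and the exact syntax accepted by your environment may differ slightly.

```sql
-- Step 1: CREATE MODEL — train a model on historical data (names are illustrative).
CREATE MODEL propensity_model
OPTIONS (
  MODEL_TYPE = 'logistic_reg',   -- algorithm keyword (see the tables later in this page)
  LABEL      = 'purchased'       -- assumed option naming the target column
)
AS SELECT age, income, visits, purchased FROM training_data;

-- Step 2: MODEL_EVALUATE — check accuracy on held-out data.
SELECT * FROM model_evaluate(propensity_model, 1, SELECT age, income, visits, purchased FROM holdout_data);

-- Step 3: MODEL_PREDICT — score new, unseen profiles.
SELECT * FROM model_predict(propensity_model, 1, SELECT age, income, visits FROM new_profiles);
```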
Regression (Supervised)
Linear Regression: Fits a linear relationship between features and a target variable.
Decision Tree Regression: Uses a tree structure to model and predict continuous values.
Random Forest Regression: An ensemble of decision trees that predicts the average output.
Gradient Boosted Tree Regression: Uses an ensemble of trees to minimize prediction error iteratively.
Generalized Linear Regression: Extends linear regression to model non-normal target distributions.
Isotonic Regression: Fits a non-decreasing or non-increasing sequence to the data.
Survival Regression: Models time-to-event data based on the Weibull distribution.
Factorization Machines Regression: Models interactions between features, making it suitable for sparse datasets and high-dimensional data.
Classification (Supervised)
Logistic Regression: Predicts probabilities for binary or multiclass classification problems.
Decision Tree Classifier: Uses a tree structure to classify data into distinct categories.
Random Forest Classifier: An ensemble of decision trees that classifies data based on majority voting.
Naive Bayes Classifier: Uses Bayes' theorem with strong independence assumptions between features.
Factorization Machines Classifier: Models interactions between features for classification, suitable for sparse and high-dimensional data.
Linear Support Vector Classifier (LinearSVC): Constructs a hyperplane for binary classification tasks, maximizing the margin between classes.
Multilayer Perceptron Classifier: A neural network classifier with multiple layers for mapping inputs to outputs using an activation function.
Unsupervised
K-Means: Partitions data into k clusters based on distance to cluster centroids.
Bisecting K-Means: Uses a hierarchical divisive approach for clustering.
Gaussian Mixture: Models data as a mixture of multiple Gaussian distributions.
Latent Dirichlet Allocation (LDA): Identifies topics in a collection of text documents.
Use the CREATE MODEL command to define a new machine learning model.
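A minimal sketch of the syntax is shown below. The model name (churn_model), the dataset (customer_training_data), its columns, and the LABEL option naming the target column are illustrative assumptions.

```sql
-- Illustrative only: model, dataset, and column names are assumptions.
CREATE MODEL churn_model
OPTIONS (
  MODEL_TYPE = 'logistic_reg',  -- which algorithm to train
  MAX_ITER   = 100,             -- number of optimization iterations
  REG_PARAM  = 0.1,             -- regularization parameter
  LABEL      = 'churned'        -- assumed option naming the target column
)
AS SELECT age, tenure, monthly_spend, churned FROM customer_training_data;
```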
In this example:
MODEL_TYPE specifies the algorithm.
MAX_ITER sets the number of iterations.
REG_PARAM is the regularization parameter.
The TRANSFORM clause allows you to preprocess features before training.
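A minimal sketch follows; the model, dataset, and column names are illustrative assumptions, and the transformer syntax mirrors the examples listed later in this page.

```sql
-- Illustrative only: model, dataset, and column names are assumptions.
CREATE MODEL propensity_model
TRANSFORM (
  binarizer(rating, 10.0) AS high_rating,                                      -- binarize a numeric feature
  string_indexer(category) AS indexed_category,                                -- index a categorical feature
  vector_assembler(array(high_rating, indexed_category, income)) AS features   -- assemble features into a vector
)
OPTIONS (MODEL_TYPE = 'logistic_reg', LABEL = 'purchased')
AS SELECT rating, category, income, purchased FROM purchase_data;
```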
This example demonstrates:
Binarizing a numeric feature.
Indexing a categorical feature.
Assembling multiple features into a vector.
Feature transformation is the process of extracting meaningful features from raw data to enhance the accuracy of downstream statistical models. The Data Distiller feature engineering SQL extension provides a comprehensive suite of techniques that streamline and automate data preprocessing. These functions allow for seamless, efficient data preparation and enable easy experimentation with various feature engineering methods. Designed for distributed computing, the SQL extension supports feature engineering on large datasets in a parallel and scalable manner, significantly reducing the time needed for preprocessing.
Feature transformation is broadly used for the following purposes:
Extraction: Extracts important information from data columns, helping models to identify key signals. For example, in textual data, long sentences may contain irrelevant words that need to be removed to improve model performance.
Transformation: Converts raw data into a format that machine learning models can consume. Since models understand numbers but not text, transformers are used to convert non-numerical data into numerical features.
Define custom preprocessing steps using the TRANSFORM clause. If the TRANSFORM clause is omitted, Data Distiller performs basic preprocessing.
Several transformers can be used for feature engineering:
Numeric Imputer
Description: Fills missing numeric values using a specified strategy such as "mean," "median," or "mode."
Example: TRANSFORM (numeric_imputer(age, 'median') as age_imputed)
String Imputer
Description: Replaces missing string values with a specified string.
Example: TRANSFORM (string_imputer(city, 'unknown') as city_imputed)
Boolean Imputer
Description: Completes missing values in a boolean column using a specified boolean value.
Example: TRANSFORM (boolean_imputer(has_account, true) as account_imputed)
Vector Assembler
Description: Combines multiple columns into a single vector column. Useful for creating feature vectors from multiple features.
Example: TRANSFORM (vector_assembler(array(col1, col2)) as feature_vector)
Binarizer
Description: Converts a numeric column to a binary value (0 or 1) based on a specified threshold.
Example: TRANSFORM (binarizer(rating, 10.0) as binarized_rating)
Bucketizer
Description: Splits a continuous numeric column into discrete bins based on specified thresholds.
Example: TRANSFORM (bucketizer(age, array(18, 30, 50)) as age_group)
String Indexer
Description: Converts a column of strings into a column of indexed numerical values, typically used for categorical features.
Example: TRANSFORM (string_indexer(category) as indexed_category)
One-Hot Encoder
Description: Converts categorical features represented as indices into a one-hot encoded vector.
Example: TRANSFORM (one_hot_encoder(indexed_category) as encoded_category)
Standard Scaler
Description: Standardizes a numeric column by removing the mean and scaling to unit variance.
Example: TRANSFORM (standard_scaler(income) as scaled_income)
Min-Max Scaler
Description: Scales a numeric column to a specified range, typically [0, 1].
Example: TRANSFORM (min_max_scaler(income, 0, 1) as scaled_income)
Max-Abs Scaler
Description: Scales a numeric column by dividing each value by the maximum absolute value in that column.
Example: TRANSFORM (max_abs_scaler(weight) as scaled_weight)
Normalizer
Description: Normalizes a vector to have unit norm, typically used for scaling individual samples.
Example: TRANSFORM (normalizer(feature_vector) as normalized_features)
Polynomial Expansion
Description: Expands a vector of features into a polynomial feature space.
Example: TRANSFORM (polynomial_expansion(features, 2) as poly_features)
Chi-Square Selector
Description: Selects the top features based on the Chi-Square test of independence.
Example: TRANSFORM (chi_square_selector(features, 3) as selected_features)
PCA (Principal Component Analysis)
Description: Reduces the dimensionality of the data by projecting it onto a lower-dimensional subspace.
Example: TRANSFORM (pca(features, 5) as pca_features)
Feature Hasher
Description: Converts categorical features into numerical features using the hashing trick, resulting in a fixed-length feature vector.
Example: TRANSFORM (feature_hasher(array(col1, col2), 100) as hashed_features)
Stop Words Remover
Description: Removes common stop words from a column of text data.
Example: TRANSFORM (stop_words_remover(text_column) as cleaned_text)
NGram
Description: Converts a column of text data into a sequence of n-grams.
Example: TRANSFORM (ngram(words, 2) as bigrams)
Tokenizer
Description: Splits a string column into a list of words.
Example: TRANSFORM (tokenizer(sentence) as words)
TF-IDF
Description: TF-IDF (term frequency–inverse document frequency) is a statistic that reflects how important a word is to a document within a collection or corpus. It is widely used in text mining and natural language processing to transform text data into numerical features. Given a term t, a document d, and a corpus D:
Term Frequency TF(t, d) is the number of times term t appears in document d.
Document Frequency DF(t, D) is the number of documents in the corpus D that include the term t.
Using only term frequency can overemphasize terms that appear frequently but carry little meaningful information (e.g., "a," "the," "of"). TF-IDF addresses this by weighting terms inversely proportional to their frequency across the corpus, thus highlighting terms that are more informative for a particular document.
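For reference, a commonly used smoothed formulation (the one popularized by Spark MLlib-style implementations) is shown below; this page does not spell out the exact formula Data Distiller applies, so treat it as an assumption:

$$\mathrm{IDF}(t, D) = \log\frac{|D| + 1}{\mathrm{DF}(t, D) + 1}, \qquad \mathrm{TFIDF}(t, d, D) = \mathrm{TF}(t, d)\cdot \mathrm{IDF}(t, D)$$

where |D| is the total number of documents in the corpus.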
Example: TRANSFORM (tf_idf(tokenized_text) as tfidf_features)
TF-IDF helps in converting a collection of text documents into a matrix of numerical features that can be used as input for machine learning models. It is particularly useful for feature extraction in text classification tasks, sentiment analysis, and information retrieval.
Word2Vec
Description: Word2Vec is an estimator that takes sequences of words representing documents and trains a Word2VecModel. The model maps each word to a unique fixed-size vector in a continuous vector space. The Word2VecModel then transforms each document into a vector by averaging the vectors of all the words in the document. This technique is widely used in natural language processing (NLP) tasks to capture the semantic meaning of words and represent them in a numerical format suitable for machine learning models.
Example: TRANSFORM (tokenizer(sentence) as tokenized, word2vec(tokenized, 10, 1) as word2vec_features)
In this example:
The tokenizer transformer splits the input text into individual words.
The word2vec transformer generates a fixed-size vector (with a specified size of 10) for each word in the sequence and computes the average vector for all words in the document.
Word2Vec is commonly used to convert text data into numerical features, allowing machine learning algorithms to process textual information while capturing semantic relationships between words.
CountVectorizer
Description: The CountVectorizer is used to convert a collection of text documents into vectors of token counts. It generates sparse representations for the documents based on the vocabulary, allowing further processing by algorithms such as Latent Dirichlet Allocation (LDA) and other text analysis techniques. The output is a sparse vector where the value of each element represents the count of a term in the document.
Input Data Type: array[string]
Output Data Type: sparse vector
Parameters:
VOCAB_SIZE: The maximum size of the vocabulary. The CountVectorizer builds a vocabulary that considers only the top VOCAB_SIZE terms, ordered by term frequency across the corpus.
MIN_DOC_FREQ: The minimum number of different documents a term must appear in to be included in the vocabulary. If set as an integer, it indicates a number of documents; if a double in [0, 1), it indicates a fraction of documents.
MAX_DOC_FREQ: The maximum number of different documents a term may appear in to be included in the vocabulary; terms appearing in more documents than the threshold are ignored. If set as an integer, it indicates a maximum number of documents; if a double in [0, 1), it indicates a maximum fraction of documents.
MIN_TERM_FREQ: Filters out rare words in a document; terms with a frequency lower than the threshold in a document are ignored. If an integer, it specifies a count; if a double in [0, 1), it specifies a fraction.
Example: TRANSFORM (count_vectorizer(texts) as cv_output)
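As an illustration of how the text transformers can feed an unsupervised model, here is a minimal sketch. The dataset, column, and model names (review_dataset, review_text, review_topics) are assumptions; the transformer and option keywords are the ones documented on this page.

```sql
-- Illustrative only: dataset, column, and model names are assumptions.
CREATE MODEL review_topics
TRANSFORM (
  tokenizer(review_text) AS words,              -- split text into words
  stop_words_remover(words) AS cleaned_words,   -- drop common stop words
  count_vectorizer(cleaned_words) AS features   -- sparse token-count vectors for LDA
)
OPTIONS (MODEL_TYPE = 'lda', NUM_CLUSTERS = 5, MAX_ITER = 20)
AS SELECT review_text FROM review_dataset;
```

The summary below lists all of the transformers and their syntax.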
Numeric Imputer
Fills missing numeric values using "mean," "median," or "mode."
TRANSFORM (numeric_imputer(age, 'median') as age_imputed)
String Imputer
Replaces missing string values with a specified string.
TRANSFORM (string_imputer(city, 'unknown') as city_imputed)
Boolean Imputer
Completes missing values in a boolean column using a specified boolean value.
TRANSFORM (boolean_imputer(has_account, true) as account_imputed)
Vector Assembler
Combines multiple columns into a single vector column.
TRANSFORM (vector_assembler(array(col1, col2)) as feature_vector)
Binarizer
Converts a numeric column to a binary value (0 or 1) based on a specified threshold.
TRANSFORM (binarizer(rating, 10.0) as binarized_rating)
Bucketizer
Splits a continuous numeric column into discrete bins based on specified thresholds.
TRANSFORM (bucketizer(age, array(18, 30, 50)) as age_group)
String Indexer
Converts a column of strings into indexed numerical values.
TRANSFORM (string_indexer(category) as indexed_category)
One-Hot Encoder
Converts categorical features represented as indices into a one-hot encoded vector.
TRANSFORM (one_hot_encoder(indexed_category) as encoded_category)
Standard Scaler
Standardizes a numeric column by removing the mean and scaling to unit variance.
TRANSFORM (standard_scaler(income) as scaled_income)
Min-Max Scaler
Scales a numeric column to a specified range, typically [0, 1].
TRANSFORM (min_max_scaler(income, 0, 1) as scaled_income)
Max-Abs Scaler
Scales a numeric column by dividing each value by the maximum absolute value in the column.
TRANSFORM (max_abs_scaler(weight) as scaled_weight)
Normalizer
Normalizes a vector to have unit norm.
TRANSFORM (normalizer(feature_vector) as normalized_features)
Polynomial Expansion
Expands a vector of features into a polynomial feature space.
TRANSFORM (polynomial_expansion(features, 2) as poly_features)
Chi-Square Selector
Selects top features based on the Chi-Square test of independence.
TRANSFORM (chi_square_selector(features, 3) as selected_features)
PCA (Principal Component Analysis)
Reduces data dimensionality by projecting onto a lower-dimensional subspace.
TRANSFORM (pca(features, 5) as pca_features)
Feature Hasher
Converts categorical features into numerical features using the hashing trick.
TRANSFORM (feature_hasher(array(col1, col2), 100) as hashed_features)
Stop Words Remover
Removes common stop words from a text data column.
TRANSFORM (stop_words_remover(text_column) as cleaned_text)
NGram
Converts text data into a sequence of n-grams.
TRANSFORM (ngram(words, 2) as bigrams)
Tokenization
Splits a string column into a list of words.
TRANSFORM (tokenizer(sentence) as words)
TF-IDF
Converts a collection of text documents to a matrix of numerical features.
TRANSFORM (tf_idf(tokenized_text) as tfidf_features)
Word2Vec
Maps words to a vector space and averages vectors for each document.
TRANSFORM (word2vec(tokenized, 10, 1) as word2vec_features)
CountVectorizer
Converts text documents to vectors of token counts.
TRANSFORM (count_vectorizer(texts) as cv_output)
Set hyper-parameters using the OPTIONS clause to optimize model performance.
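For example, a minimal sketch using hyper-parameters from the tables below; the model, dataset, and column names, and the LABEL option naming the target column, are illustrative assumptions.

```sql
-- Illustrative only: model, dataset, and column names are assumptions.
CREATE MODEL product_affinity_model
OPTIONS (
  MODEL_TYPE = 'random_forest_classifier',
  NUM_TREES  = 50,                 -- default is 20
  MAX_DEPTH  = 8,                  -- default is 5
  LABEL      = 'bought_product'    -- assumed option naming the target column
)
AS SELECT * FROM purchase_history;
```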
Use vector assemblers to combine related features.
Perform feature scaling (e.g., normalization) where applicable.
Choose models based on the problem type (e.g., classification vs. regression).
The detailed list of hyper-parameters for the regression models is below:
Linear Regression ('linear_reg')
MAX_ITER: Maximum number of iterations for optimization. Default: 100. Allowed: >=0.
REG_PARAM: Regularization parameter for controlling model complexity. Default: 0. Allowed: >=0.
ELASTIC_NET_PARAM: Mixing parameter for ElasticNet regularization (L1 vs. L2 penalty). Default: 0. Allowed: [0, 1].
Decision Tree Regression ('decision_tree_regression')
MAX_BINS: Maximum number of bins for discretizing continuous features. Default: 32. Allowed: >=2.
CACHE_NODE_IDS: Whether to cache node IDs for training deeper trees. Default: false. Allowed: true, false.
CHECKPOINT_INTERVAL: How often to checkpoint cached node IDs during training. Default: 10. Allowed: >=1.
IMPURITY: Criterion for information gain calculation ("variance" used for regression). Default: "variance". Allowed: "variance".
MAX_DEPTH: Maximum depth of the tree. Default: 5. Allowed: [0, 30].
Random Forest Regression ('random_forest_regression')
NUM_TREES: Number of trees in the forest. Default: 20. Allowed: >=1.
MAX_DEPTH: Maximum depth of each tree in the forest. Default: 5. Allowed: >=0.
SUBSAMPLING_RATE: Fraction of data used to train each tree. Default: 1. Allowed: (0, 1].
FEATURE_SUBSET_STRATEGY: Strategy for selecting features for each split. Default: "auto". Allowed: "auto", "all", "sqrt", "log2".
IMPURITY: Criterion for information gain calculation ("variance" used for regression). Default: "variance". Allowed: "variance".
Gradient Boosted Tree Regression ('gradient_boosted_tree_regression')
MAX_ITER: Maximum number of iterations (equivalent to the number of trees). Default: 20. Allowed: >=0.
STEP_SIZE: Step size (learning rate) for scaling the contribution of each tree. Default: 0.1. Allowed: (0, 1].
LOSS_TYPE: Loss function to be minimized during training. Default: "squared". Allowed: "squared", "absolute".
Generalized Linear Regression ('generalized_linear_reg')
MAX_ITER: Maximum number of iterations for optimization. Default: 25. Allowed: >=0.
REG_PARAM: Regularization parameter for controlling model complexity. Default: 0. Allowed: >=0.
FAMILY: Family of distributions for the response variable (e.g., Gaussian, Poisson). Default: "gaussian". Allowed: "gaussian", "binomial", "poisson", "gamma", "tweedie".
Isotonic Regression ('isotonic_regression')
ISOTONIC: Whether the output sequence should be isotonic (increasing) or antitonic (decreasing). Default: true. Allowed: true, false.
Survival Regression ('survival_regression')
MAX_ITER: Maximum number of iterations for optimization. Default: 100. Allowed: >=0.
TOL: Convergence tolerance for optimization. Default: 1.0E-6. Allowed: >=0.
Factorization Machines Regression ('factorization_machines_regression')
TOL: Convergence tolerance for optimization. Default: 1.0E-6. Allowed: >=0.
FACTOR_SIZE: Dimensionality of the factors. Default: 8. Allowed: >=0.
FIT_INTERCEPT: Whether to fit an intercept term. Default: true. Allowed: true, false.
FIT_LINEAR: Whether to fit linear terms (1-way interactions). Default: true. Allowed: true, false.
INIT_STD: Standard deviation of initial coefficients. Default: 0.01. Allowed: >=0.
MAX_ITER: Number of iterations for the algorithm. Default: 100. Allowed: >=0.
MINI_BATCH_FRACTION: Fraction of data used in each mini-batch. Default: 1. Allowed: (0, 1].
REG_PARAM: Regularization parameter. Default: 0. Allowed: >=0.
SEED: Random seed for reproducibility. Default: not set. Allowed: any 64-bit integer.
SOLVER: Solver algorithm used for optimization. Default: "adamW". Allowed: "gd", "adamW".
STEP_SIZE: Initial step size for the first step. Default: 1. Allowed: >0.
PREDICTION_COL: Name of the column for prediction output. Default: "prediction". Allowed: any string.
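To show how these regression hyper-parameters plug into CREATE MODEL, here is a minimal sketch; the model, dataset, and column names, and the LABEL option, are illustrative assumptions.

```sql
-- Illustrative only: model, dataset, and column names are assumptions.
CREATE MODEL order_value_model
OPTIONS (
  MODEL_TYPE = 'gradient_boosted_tree_regression',
  MAX_ITER   = 50,            -- number of boosting iterations/trees (default 20)
  STEP_SIZE  = 0.05,          -- learning rate (default 0.1)
  LOSS_TYPE  = 'squared',
  LABEL      = 'order_value'  -- assumed option naming the target column
)
AS SELECT age, visits, days_since_last_purchase, order_value FROM order_history;
```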
The detailed list of hyper-parameters for the classification models is below:
Logistic Regression ('logistic_reg')
MAX_ITER: Maximum number of iterations for optimization. Default: 100. Allowed: >=0.
REG_PARAM: Regularization parameter for controlling model complexity. Default: 0. Allowed: >=0.
ELASTIC_NET_PARAM: Mixing parameter for ElasticNet regularization (L1 vs. L2 penalty). Default: 0. Allowed: [0, 1].
FIT_INTERCEPT: Whether to fit an intercept term in the model. Default: true. Allowed: true, false.
TOL: Convergence tolerance for optimization. Default: 1.0E-6. Allowed: >=0.
PROBABILITY_COL: Column name for predicted class probabilities. Default: "probability". Allowed: any column name.
RAW_PREDICTION_COL: Column name for raw prediction output (confidence scores). Default: "rawPrediction". Allowed: any column name.
THRESHOLDS: Thresholds for binary or multiclass classification. Default: not set. Allowed: array of doubles.
Decision Tree Classifier ('decision_tree_classifier')
MAX_BINS: Maximum number of bins for discretizing continuous features. Default: 32. Allowed: >=2.
CACHE_NODE_IDS: Whether to cache node IDs for training deeper trees. Default: false. Allowed: true, false.
CHECKPOINT_INTERVAL: How often to checkpoint cached node IDs during training. Default: 10. Allowed: >=1.
IMPURITY: Criterion for information gain calculation. Default: "gini". Allowed: "gini", "entropy".
MAX_DEPTH: Maximum depth of the tree. Default: 5. Allowed: [0, 30].
MIN_INFO_GAIN: Minimum information gain required for a split at a node. Default: 0. Allowed: >=0.0.
MIN_INSTANCES_PER_NODE: Minimum number of instances required in each child after a split. Default: 1. Allowed: >=1.
SEED: Random seed for reproducibility. Default: not set. Allowed: any 64-bit integer.
WEIGHT_COL: Column name for sample weights. Default: not set. Allowed: any column name.
Random Forest Classifier ('random_forest_classifier')
NUM_TREES: Number of trees in the forest. Default: 20. Allowed: >=1.
MAX_BINS: Maximum number of bins for discretizing continuous features. Default: 32. Allowed: >=2.
MAX_DEPTH: Maximum depth of each tree in the forest. Default: 5. Allowed: >=0.
IMPURITY: Criterion for information gain calculation. Default: "gini". Allowed: "gini", "entropy".
SUBSAMPLING_RATE: Fraction of data used to train each tree. Default: 1. Allowed: (0, 1].
FEATURE_SUBSET_STRATEGY: Strategy for selecting features for each split. Default: "auto". Allowed: "auto", "all", "sqrt", "log2".
BOOTSTRAP: Whether to use bootstrap sampling when building trees. Default: true. Allowed: true, false.
SEED: Random seed for reproducibility. Default: not set. Allowed: any 64-bit integer.
WEIGHT_COL: Column name for sample weights. Default: not set. Allowed: any column name.
PROBABILITY_COL: Column name for predicted class probabilities. Default: "probability". Allowed: any column name.
RAW_PREDICTION_COL: Column name for raw prediction output (confidence scores). Default: "rawPrediction". Allowed: any column name.
Naive Bayes Classifier ('naive_bayes_classifier')
MODEL_TYPE: Type of Naive Bayes model used (e.g., multinomial, bernoulli). Default: "multinomial". Allowed: "multinomial", "bernoulli", "gaussian".
SMOOTHING: Smoothing parameter to prevent zero probabilities. Default: 1. Allowed: >=0.0.
PROBABILITY_COL: Column name for predicted class probabilities. Default: "probability". Allowed: any column name.
RAW_PREDICTION_COL: Column name for raw prediction output (confidence scores). Default: "rawPrediction". Allowed: any column name.
WEIGHT_COL: Column name for sample weights. Default: not set. Allowed: any column name.
Factorization Machines Classifier ('factorization_machines_classifier')
TOL: Convergence tolerance for optimization. Default: 1.0E-6. Allowed: >=0.
FACTOR_SIZE: Dimensionality of the factors. Default: 8. Allowed: >=0.
FIT_INTERCEPT: Whether to fit an intercept term. Default: true. Allowed: true, false.
FIT_LINEAR: Whether to fit linear terms (1-way interactions). Default: true. Allowed: true, false.
INIT_STD: Standard deviation of initial coefficients. Default: 0.01. Allowed: >=0.
MAX_ITER: Number of iterations for the algorithm. Default: 100. Allowed: >=0.
MINI_BATCH_FRACTION: Fraction of data used in each mini-batch. Default: 1. Allowed: (0, 1].
REG_PARAM: Regularization parameter. Default: 0. Allowed: >=0.
SEED: Random seed for reproducibility. Default: not set. Allowed: any 64-bit integer.
SOLVER: Solver algorithm used for optimization. Default: "adamW". Allowed: "gd", "adamW".
STEP_SIZE: Initial step size for the first step. Default: 1. Allowed: >0.
PROBABILITY_COL: Column name for predicted class conditional probabilities. Default: "probability". Allowed: any column name.
PREDICTION_COL: Name of the column for prediction output. Default: "prediction". Allowed: any string.
RAW_PREDICTION_COL: Column name for raw prediction (confidence scores). Default: "rawPrediction". Allowed: any column name.
ONE_VS_REST: Whether to enable one-vs-rest classification. Default: false. Allowed: true, false.
Linear Support Vector Classifier ('linear_svc_classifier')
MAX_ITER: Number of iterations for optimization. Default: 100. Allowed: >=0.
AGGREGATION_DEPTH: Suggested depth for tree aggregation. Default: 2. Allowed: >=2.
FIT_INTERCEPT: Whether to fit an intercept term. Default: true. Allowed: true, false.
TOL: Convergence tolerance for optimization. Default: 1.0E-6. Allowed: >=0.
MAX_BLOCK_SIZE_IN_MB: Maximum memory in MB for stacking input data into blocks. Default: 0. Allowed: >=0.
REG_PARAM: Regularization parameter. Default: 0. Allowed: >=0.
STANDARDIZATION: Whether to standardize the training features. Default: true. Allowed: true, false.
PREDICTION_COL: Name of the column for prediction output. Default: "prediction". Allowed: any string.
RAW_PREDICTION_COL: Column name for raw prediction (confidence scores). Default: "rawPrediction". Allowed: any column name.
ONE_VS_REST: Whether to enable one-vs-rest classification. Default: false. Allowed: true, false.
Multilayer Perceptron Classifier ('multilayer_perceptron_classifier')
MAX_ITER: Number of iterations for the algorithm. Default: 100. Allowed: >=0.
BLOCK_SIZE: Block size for stacking input data in matrices. Default: 128. Allowed: >=1.
STEP_SIZE: Step size for each iteration of optimization. Default: 0.03. Allowed: >0.
TOL: Convergence tolerance for optimization. Default: 1.0E-6. Allowed: >=0.
PREDICTION_COL: Name of the column for prediction output. Default: "prediction". Allowed: any string.
SEED: Random seed for reproducibility. Default: not set. Allowed: any 64-bit integer.
PROBABILITY_COL: Column name for predicted class conditional probabilities. Default: "probability". Allowed: any column name.
RAW_PREDICTION_COL: Column name for raw prediction (confidence scores). Default: "rawPrediction". Allowed: any column name.
ONE_VS_REST: Whether to enable one-vs-rest classification. Default: false. Allowed: true, false.
Gradient Boosted Tree Classifier ('gradient_boosted_tree_classifier')
MAX_BINS: Maximum number of bins used for discretizing continuous features and choosing how to split on features at each node; more bins give higher granularity. Default: 32. Allowed: >=2 and >= the number of categories in any categorical feature.
CACHE_NODE_IDS: If false, the algorithm passes trees to executors to match instances with nodes; if true, node IDs for each instance are cached to speed up training of deeper trees. Default: false. Allowed: true, false.
CHECKPOINT_INTERVAL: How often to checkpoint the cached node IDs (e.g., 10 means checkpoint every 10 iterations). Used only if CACHE_NODE_IDS is true and the checkpoint directory is set in SparkContext. Default: 10. Allowed: >=1.
MAX_DEPTH: Maximum depth of the tree; for example, depth 0 means 1 leaf node, and depth 1 means 1 internal node plus 2 leaf nodes. Default: 5. Allowed: >=0.
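Similarly, a minimal classification sketch; the model, dataset, and column names, and the LABEL option, are illustrative assumptions.

```sql
-- Illustrative only: model, dataset, and column names are assumptions.
CREATE MODEL churn_classifier
OPTIONS (
  MODEL_TYPE = 'naive_bayes_classifier',
  SMOOTHING  = 1.0,        -- smoothing parameter (default 1)
  LABEL      = 'churned'   -- assumed option naming the target column
)
AS SELECT plan_type, support_tickets, tenure_months, churned FROM subscriber_data;
```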
The detailed list of hyper-parameters for the clustering models is below:
K-Means ('kmeans')
MAX_ITER: Maximum number of iterations for the clustering algorithm. Default: 20. Allowed: >=0.
TOL: Convergence tolerance for the iterative algorithm. Default: 0.0001. Allowed: >=0.
NUM_CLUSTERS: Number of clusters to form. Default: 2. Allowed: >1.
DISTANCE_TYPE: Distance measure used for clustering. Default: "euclidean". Allowed: "euclidean", "cosine".
KMEANS_INIT_METHOD: Initialization algorithm for cluster centers. Default: "k-means||".
INIT_STEPS: Number of steps for the k-means|| initialization mode.
PREDICTION_COL: Column name for the predicted cluster. Default: "prediction". Allowed: any column name.
SEED: Random seed for reproducibility. Default: not set. Allowed: any 64-bit integer.
WEIGHT_COL: Column name for sample weights. Default: not set. Allowed: any column name.
Bisecting K-Means ('bisecting_kmeans')
MAX_ITER: Maximum number of iterations for the clustering algorithm. Default: 20. Allowed: >=0.
NUM_CLUSTERS: Number of leaf clusters to form. Default: 4. Allowed: >1.
DISTANCE_MEASURE: Distance measure used for clustering. Default: "euclidean". Allowed: "euclidean", "cosine".
MIN_DIVISIBLE_CLUSTER_SIZE: Minimum number of points for a divisible cluster. Default: 1. Allowed: >0.
PREDICTION_COL: Column name for the predicted cluster. Default: "prediction". Allowed: any column name.
SEED: Random seed for reproducibility. Default: not set. Allowed: any 64-bit integer.
WEIGHT_COL: Column name for sample weights. Default: not set. Allowed: any column name.
Gaussian Mixture ('gaussian_mixture')
MAX_ITER: Maximum number of iterations for the EM algorithm. Default: 100. Allowed: >=0.
NUM_CLUSTERS: Number of Gaussian distributions in the mixture model. Default: 2. Allowed: >1.
TOL: Convergence tolerance for iterative algorithms. Default: 0.01. Allowed: >=0.
AGGREGATION_DEPTH: Depth for tree aggregation during the EM algorithm. Default: 2. Allowed: >=2.
PROBABILITY_COL: Column name for predicted class conditional probabilities. Default: "probability". Allowed: any column name.
PREDICTION_COL: Column name for the predicted cluster. Default: "prediction". Allowed: any column name.
SEED: Random seed for reproducibility. Default: not set. Allowed: any 64-bit integer.
WEIGHT_COL: Column name for sample weights. Default: not set. Allowed: any column name.
Latent Dirichlet Allocation ('lda')
MAX_ITER: Maximum number of iterations for the algorithm. Default: 20. Allowed: >=0.
OPTIMIZER: Optimizer used to estimate the LDA model. Default: "online". Allowed: "online", "em".
NUM_CLUSTERS: Number of topics to identify. Default: 10. Allowed: >1.
DOC_CONCENTRATION: Concentration parameter for the prior placed on documents' distributions over topics. Default: set automatically. Allowed: >0.
TOPIC_CONCENTRATION: Concentration parameter for the prior placed on topics' distributions over terms. Default: set automatically. Allowed: >0.
LEARNING_DECAY: Learning rate for the online optimizer. Default: 0.51. Allowed: (0.5, 1.0].
LEARNING_OFFSET: Learning parameter that downweights early iterations for the online optimizer. Default: 1024. Allowed: >0.
SUBSAMPLING_RATE: Fraction of the corpus used for each iteration of mini-batch gradient descent. Default: 0.05. Allowed: (0, 1].
OPTIMIZE_DOC_CONCENTRATION: Whether to optimize the doc concentration during training. Default: false. Allowed: true, false.
CHECKPOINT_INTERVAL: Frequency of checkpointing the cached node IDs. Default: 10. Allowed: >=1.
SEED: Random seed for reproducibility. Default: not set. Allowed: any 64-bit integer.
TOPIC_DISTRIBUTION_COL: Output column with estimates of the topic mixture distribution for each document. Default: not set. Allowed: any column name.
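Finally, a minimal clustering sketch; because clustering is unsupervised, no label is required. The model, dataset, and column names are illustrative assumptions, and the model_predict call follows the three-step pattern described at the top of this page.

```sql
-- Illustrative only: model, dataset, and column names are assumptions.
CREATE MODEL customer_segments
OPTIONS (
  MODEL_TYPE    = 'kmeans',
  NUM_CLUSTERS  = 5,             -- default is 2
  MAX_ITER      = 50,            -- default is 20
  DISTANCE_TYPE = 'euclidean'
)
AS SELECT recency, frequency, monetary_value FROM customer_metrics;

-- Assign each profile to a cluster.
SELECT * FROM model_predict(customer_segments, 1, SELECT recency, frequency, monetary_value FROM customer_metrics);
```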