IDR 301: Using Levenshtein Distance for Fuzzy Matching in Identity Resolution with Data Distiller
Learn how to apply fuzzy matching with Data Distiller to improve accuracy in identity resolution and profile management.
For this tutorial, you will need to ingest the following dataset:
using the steps outlined in:
We will also be using DBVisualizer to extract large volumes of data directly onto our machine from the Data Distiller backend:
Fuzzy matching is a technique used to identify and link records that are similar but not exact matches, such as CRM_IDs or name fields with slight variations. By applying fuzzy matching, you can address discrepancies arising from data entry errors or inconsistent formatting, which are common causes of profile collapse in customer databases. When records are mistakenly treated as separate due to small variations, fuzzy matching can help consolidate them, leading to more accurate identity resolution.
Fuzzy matching is particularly valuable when dealing with data entry errors or subtle differences in identifier formats that cause profile fragmentation or collapse. For instance, CRM_IDs that differ by one or two characters may represent the same individual, but these small discrepancies prevent a system from recognizing them as such. By applying fuzzy matching, you can detect and merge these near-duplicates, improving the quality and continuity of the customer profiles.
The implementation involves preprocessing the data with fuzzy matching techniques using Data Distiller. This initial processing phase standardizes and identifies close matches within CRM_IDs or name fields. The cleaned and standardized values are then applied in SQL to create a consolidated dataset, resulting in more accurate profile records and reducing instances of incorrect merges or data fragmentation. This multi-step approach combines fuzzy matching capabilities with SQL’s power to efficiently handle large datasets, enhancing overall data accuracy and reliability.
The Levenshtein distance is a way of measuring how different two words (or strings) are from each other. Imagine you have two words, and you want to see how many small changes it would take to turn one word into the other.
Each change could be:
Inserting a new letter (e.g., turning "cat" into "cart" by adding "r").
Deleting a letter (e.g., turning "cart" into "cat" by removing "r").
Replacing one letter with another (e.g., turning "cat" into "bat" by changing "c" to "b").
The Levenshtein distance counts the minimum number of these small changes needed to make the two words exactly the same. So, if two words are very similar, like "cat" and "bat," the distance is small (only one change). If they're quite different, like "cat" and "elephant," the distance is much larger because you'd need many changes.
In essence, Levenshtein distance gives us a simple way to measure how "close" or "far" two words are from each other based on the number of changes required. It's often used in spell-checkers or in finding similar records in databases to help match up entries that might be slightly different due to typos or inconsistencies.
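To make this concrete, you can compute the distances for the examples above directly in SQL with the LEVENSHTEIN function used later in this tutorial (a quick illustration):

```sql
SELECT
    LEVENSHTEIN('cat', 'bat')      AS cat_vs_bat,       -- 1: replace "c" with "b"
    LEVENSHTEIN('cat', 'cart')     AS cat_vs_cart,      -- 1: insert "r"
    LEVENSHTEIN('cat', 'elephant') AS cat_vs_elephant;  -- 6: one substitution plus five insertions
```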
Fuzzy Matching on CRM_ID Using LEVENSHTEIN Distance

In the SQL example below, fuzzy matching is applied to identify records with CRM_IDs that are nearly identical, even if they differ slightly due to minor data entry errors or formatting inconsistencies. The query leverages the Levenshtein distance function to calculate the edit distance between pairs of CRM_IDs, which measures the minimum number of single-character edits (insertions, deletions, or substitutions) needed to make one CRM_ID identical to another. By setting a threshold of 2, the query identifies CRM_ID pairs that have minor variations, indicating that they may refer to the same entity but were inconsistently recorded. This approach is particularly useful in cases where exact matches fail to capture all duplicates due to slight discrepancies. By storing these potential matches in a temporary table, fuzzy_matches, the process allows for a detailed review or automated cleanup to merge or consolidate profiles, ultimately improving the accuracy and integrity of the dataset.
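A sketch of such a query is shown below. It assumes example_dataset exposes ECID and CRM_ID columns, pairs records through a self-join, and keeps the edit distance as crm_id_similarity_score; adjust the names and the threshold to your data.

```sql
-- Pair up CRM_IDs that are at most 2 edits apart and stage them for review.
CREATE TEMP TABLE fuzzy_matches AS (
    SELECT
        a.ECID   AS ECID1,
        a.CRM_ID AS CRM_ID1,
        b.ECID   AS ECID2,
        b.CRM_ID AS CRM_ID2,
        LEVENSHTEIN(a.CRM_ID, b.CRM_ID) AS crm_id_similarity_score
    FROM example_dataset a
    JOIN example_dataset b
      ON a.CRM_ID <> b.CRM_ID                      -- compare only distinct CRM_ID values
    WHERE LEVENSHTEIN(a.CRM_ID, b.CRM_ID) <= 2     -- threshold of 2 edits
);
```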
In the query below, the goal is to identify the best match for each ECID based on the highest similarity score between CRM_IDs, using our fuzzy matching algorithm. The query operates on the principle of minimizing the Levenshtein distance (the edit distance) between CRM_ID pairs within each ECID group. By finding the smallest possible crm_id_similarity_score, we capture the closest match, meaning the pair with the least number of character edits needed to make the CRM_IDs identical.

The subquery (SELECT MIN(crm_id_similarity_score)...) determines this closest match by selecting the smallest crm_id_similarity_score for each ECID1, representing the record with the highest similarity. The primary query then filters fuzzy_matches to include only those pairs whose similarity score is equal to this minimum value, effectively creating best_matches. This temporary table stores each ECID and its closest matching record, allowing for precise consolidation based on the closest possible CRM_ID values. By focusing on the minimum edit distance, the query ensures that only the best match is selected for each ECID, thus refining identity resolution and reducing the chance of incorrect profile merges.
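Expressed against the fuzzy_matches columns assumed above, the query could look like this sketch:

```sql
-- For each ECID1, keep only the pair(s) with the smallest edit distance.
CREATE TEMP TABLE best_matches AS (
    SELECT fm.*
    FROM fuzzy_matches fm
    WHERE fm.crm_id_similarity_score = (
        SELECT MIN(fm2.crm_id_similarity_score)
        FROM fuzzy_matches fm2
        WHERE fm2.ECID1 = fm.ECID1                 -- closest match within the same ECID group
    )
);
```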
To create the cleaned dataset, use the following:
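One way to write it, assuming example_dataset carries ECID, CRM_ID, and Login_Timestamp and that best_matches keeps the ECID1/CRM_ID1 names from the previous step:

```sql
-- Swap in the best-match CRM_ID where one exists; otherwise keep the original.
CREATE TEMP TABLE cleaned_dataset AS (
    SELECT
        a.ECID,
        COALESCE(b.CRM_ID1, a.CRM_ID) AS CRM_ID,   -- preferred identifier per ECID
        a.Login_Timestamp
    FROM example_dataset a
    LEFT JOIN best_matches b
      ON a.ECID = b.ECID1                          -- preserve every record from example_dataset
);
```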
In the query above, the goal is to create a cleaned dataset where each ECID is associated with a preferred CRM_ID, selected based on the closest match identified in the best_matches table.

The query works on the principle of data standardization and preference selection: it uses the fuzzy matching results to replace potentially inconsistent or duplicate CRM_IDs with the most representative one for each ECID. Here's how it achieves this:

COALESCE Selection: The query applies the COALESCE(b.CRM_ID1, a.CRM_ID) function, which takes the CRM_ID from best_matches (if available) as the preferred identifier. COALESCE ensures that if there is no match in best_matches, the original CRM_ID from example_dataset (a.CRM_ID) is retained. This means that for each ECID, the system first looks for a refined CRM_ID and defaults to the original one if no match exists.

LEFT JOIN: By performing a LEFT JOIN between example_dataset (a) and best_matches (b) on ECID, the query ensures that all records in example_dataset are preserved. Only records with a corresponding ECID in best_matches will have the CRM_ID replaced, making the cleaned dataset comprehensive while preserving unmatched entries.

Resulting Cleaned Dataset: The cleaned_dataset now contains records where each ECID is linked to the best possible CRM_ID, improving data consistency by standardizing identifiers based on the closest match.
The results are:
To remove duplicate records in the cleaned_dataset based on ECID while retaining only one record per ECID, you can use a ROW_NUMBER() function to rank the records within each ECID group and then select only the top-ranked record. This method ensures that duplicates are filtered out, leaving only one preferred CRM_ID for each ECID.
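A sketch of that step, ordering by Login_Timestamp so the most recent record wins (the output name cleaned_dataset_deduped is only an assumption for illustration):

```sql
-- Rank records within each ECID by recency and keep only the top-ranked one.
CREATE TEMP TABLE cleaned_dataset_deduped AS (
    SELECT ECID, CRM_ID, Login_Timestamp
    FROM (
        SELECT
            ECID,
            CRM_ID,
            Login_Timestamp,
            ROW_NUMBER() OVER (
                PARTITION BY ECID
                ORDER BY Login_Timestamp DESC      -- most recent login first
            ) AS rn
        FROM cleaned_dataset
    ) ranked
    WHERE rn = 1
);
```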
You can preview the results, especially for this small dataset:
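For example, using the deduplicated table assumed above:

```sql
SELECT ECID, CRM_ID, Login_Timestamp
FROM cleaned_dataset_deduped
ORDER BY ECID;
```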
The reduced row count in the result set (9,458 instead of the original 10,000) indicates that some records were filtered out as duplicates when we applied the ROW_NUMBER() function. This reduction occurred because multiple records with the same ECID were consolidated, keeping only the top-ranked (most recent) record for each ECID. In other words, many ECIDs in the original example_dataset had multiple records with different CRM_IDs or duplicate entries based on Login_Timestamp. When applying ROW_NUMBER() and keeping only rn = 1, we retained only one unique record per ECID based on the most recent Login_Timestamp. Therefore, if there were originally 10,000 rows but many duplicate ECIDs, filtering down to the top-ranked rn = 1 record for each unique ECID results in only 9,458 unique ECIDs in cleaned_dataset.
This shows that there are no duplicates:
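A check along these lines (again using the assumed table name) confirms it by returning no rows when each ECID appears exactly once:

```sql
-- An empty result confirms the deduplicated dataset has one row per ECID.
SELECT ECID, COUNT(*) AS record_count
FROM cleaned_dataset_deduped
GROUP BY ECID
HAVING COUNT(*) > 1;
```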