DDA 300: Audience Overlaps with Data Distiller
Learn how to leverage snapshots of profile attributes, identities, and segment memberships to build exotic queries such as 3- or 4-segment overlaps
Every day, a snapshot of the Profile attributes for every merge policy is exported to the data lake. These system datasets are hidden by default but can be made visible by toggling the system-dataset filter in the data catalog.
These datasets contain information about the profile attributes, the identity map, and the segment membership as reflected at the time of creating the snapshot. The examples below show how you can explore this dataset to understand identity composition and even create exotic segment overlaps.
To access the Profile Snapshot datasets, navigate to Datasets and click the filter to show system datasets. Turning this filter on shows system datasets alongside the others. Search for "profile attribute" to narrow the list. A profile attribute snapshot dataset is exported for every merge policy in the system. In the example below, there are 3 merge policies and hence 3 datasets.
These datasets will typically look like the following unless their name has been changed: profile_attribute_cd03f195_66b5_4a62_a6f9_c606e951c903
Even if the name has been changed, you should be able to query the dataset:
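For example, using the dataset name shown above (substitute the table name from your own environment), a quick sanity check might look like:

```sql
-- Query the snapshot dataset directly by its table name.
-- Substitute the table name from your own environment.
SELECT *
FROM profile_attribute_cd03f195_66b5_4a62_a6f9_c606e951c903
LIMIT 10;
```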
The columns you will see in the result will look like the following. If you see identityMap and segmentMembership fields, then you are on the right track.
There are some other interesting datasets that are generated on a daily basis by the system. Explore some of these on your own.
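As a sketch, a query against the destination-to-segment mapping dataset might look like the following. The table and column names here (other than segmentnamespace, which appears in the results) are hypothetical; locate the actual system dataset in your catalog:

```sql
-- Hypothetical table and column names; find the real system dataset
-- in your catalog. Each row holds one destination-to-segment mapping.
SELECT segmentid, segmentnamespace, destinationid
FROM destination_segment_mapping_dataset
LIMIT 10;
```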
This will give you the list of destinations that have been mapped to a segment. Note that each row contains one such mapping: a single destination can have multiple segments and vice versa. Also note that segmentnamespace refers to the source of the segment, i.e., whether it was created in AEP or elsewhere.
You may also see a dataset that contains the segment information. Search for a dataset that has "segment definition" in the schema type and you should be able to locate it.
The results of the query look like the following:
And finally, the destination ID to account name mapping is available in this system dataset:
Let us explore another dataset that gives us information about what identities are available for mapping to a destination.
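A hedged sketch of such a query follows. The table name is hypothetical; the isMapped column is referenced in the results discussed below:

```sql
-- Hypothetical table name; isMapped indicates whether a source identity
-- is actually mapped to the destination's target identity (namespace).
SELECT *
FROM destination_identity_mapping_dataset
LIMIT 10;
```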
The result is:
The result shows that for a given target identity (namespace) available in the destination, there are multiple source identity options, many of which are not used (isMapped = false). These identities were sourced from the profiles that were part of the audience, which in turn was powered by the identity graph.
Tip: If an identity that is mapped to a destination field is missing at the source, that identity is dropped from the payload. Other identities of the same profile that have non-null values will still be sent to the destination. Activation to a destination is essentially a process of identity disaggregation.
Warning: The Profile attribute export dataset does not contain profiles that have more than 50 identities. To avoid losing these profiles, you need to contact Adobe to configure the identity graph so that it retains the most recent identities belonging to a cookie namespace.
Just for fun, let us see which fields are actually being used by the destinations in this test environment:
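One way to sketch this is to filter the identity mapping dataset on isMapped (the table name and the sourceidentity column are hypothetical; isMapped appears in the results above):

```sql
-- Hypothetical table and sourceidentity column; isMapped = true keeps
-- only the identities actually wired to a destination field.
SELECT DISTINCT sourceidentity, namespace
FROM destination_identity_mapping_dataset
WHERE isMapped = true;
```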
The results are:
The following tables are created from the Profile Attribute Snapshot in the reporting star schema for Real-Time Customer Profile. We will write queries against these tables to answer questions about audience composition and overlaps.
Note that there is no table-name-to-merge-policy mapping in this star schema. You need to run the first query to determine the merge policy and identify the dataset_id. Use that to search for the dataset name in Universal Search at the top, not the local dataset search.
You can map the dataset_id to the dataset name by typing the following and browsing down the list of results:
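One option in Data Distiller is SHOW TABLES; in my experience its output includes the dataset ID alongside the table name, so you can scan the list for a match:

```sql
-- List all tables; scan the dataSetId column for the ID you captured.
SHOW TABLES;
```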
Another approach would be to copy and paste that into the search bar at the top in the AEP UI.
The dataset name will reveal itself. Click on the dataset to get the table name which you will need as the table name for all your queries.
You should be able to start exploring the profile snapshot datasets immediately.
Merge policy is an "MDM-like" feature that lets you prioritize attribute records whenever there is a conflict, i.e., either prefer one dataset over another or use the timestamp to resolve the conflict. Mostly you will see timestamp precedence used, as the source of truth is typically a CRM system that evolves over time; new attribute records with updates get added to the Real-Time Customer Profile, and the newer records need to take precedence.
If you navigate to the Profiles -> Overview page for the same merge policy, you will see the same count. Note that we chose the Default Time-based merge policy in this example in our environment.
This splitting into rows destroys the identity map association.
Before you extract the identity information, you need to understand the map structure. Use the to_json feature in Data Distiller to inspect the schema structure in place, without having to go to the Schemas UI screen. This is an extremely important feature when working with deeply nested data structures and maps.
If you execute the following query, you will see:
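A minimal sketch of that inspection query, using the example snapshot table name from earlier (substitute your own):

```sql
-- to_json renders the nested identityMap structure as a JSON string,
-- revealing the schema without opening the Schemas UI.
SELECT to_json(identityMap)
FROM profile_attribute_cd03f195_66b5_4a62_a6f9_c606e951c903
LIMIT 1;
```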
What you see above is the identity map structure. The map uses the identity namespace (email, crmid) as the unique key to index the values. Note that each value is an array. To extract the identities, use the following code:
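A sketch of the extraction, again using the example table name (in Spark-style SQL, extracting .id over an array of structs yields the array of id values):

```sql
-- Pull the array of identity values for each namespace key in the map.
SELECT identityMap['email'].id AS emails,
       identityMap['crmid'].id AS crmids
FROM profile_attribute_cd03f195_66b5_4a62_a6f9_c606e951c903;
```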
The results look like the following:
Typically, we would explode this array:
But there is a problem: plain vanilla "explode" drops rows that do not have any identity values. That is a problem because the absence of an identity is a signal, and it may be associated with other identities that the person has. Let us fix the issue by using explode_outer from Data Distiller:
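A minimal sketch of the fix, assuming the same example table name as above:

```sql
-- explode_outer keeps rows where the email namespace is absent,
-- emitting NULL instead of silently dropping the profile.
SELECT explode_outer(identityMap['email'].id) AS email
FROM profile_attribute_cd03f195_66b5_4a62_a6f9_c606e951c903;
```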
But just doing an explode_outer on a single identity namespace is of no use by itself: we have destroyed the association between identities within that same namespace. Let us generate a synthetic UUID for each profile row to keep track of our identities as we explode them. Done right, this yields a columnar table of UUIDs and related identities that gives us leverage for a variety of operations.
If we take the first element of the email namespace and concatenate it with the first element of the crmid namespace, we are guaranteed a unique identifier. If not, it would mean that two rows in the profile export dataset have an identity in common, and identity stitching in Real-Time Customer Profile should have merged them into a single row.
Let us now generate the unique UUIDs along with the schema-friendly identity map structure:
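A first-attempt sketch (array indexing is 0-based in Spark-style SQL; table name as in the earlier example):

```sql
-- Synthetic UUID: first email value concatenated with first crmid value.
SELECT concat(identityMap['email'].id[0],
              identityMap['crmid'].id[0]) AS uuid,
       identityMap
FROM profile_attribute_cd03f195_66b5_4a62_a6f9_c606e951c903;
```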
The results will look like:
But the concat string function does a poor job of concatenating NULL values. We have to remedy that with COALESCE:
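A sketch of the remedy, substituting an empty string for a missing namespace so the whole concatenation does not become NULL:

```sql
-- COALESCE replaces a NULL identity with '' so the UUID survives
-- even when one of the namespaces is absent.
SELECT concat(coalesce(identityMap['email'].id[0], ''),
              coalesce(identityMap['crmid'].id[0], '')) AS uuid,
       identityMap
FROM profile_attribute_cd03f195_66b5_4a62_a6f9_c606e951c903;
```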
This works and you will get:
If there are two identity namespaces, the explode_outer operator works on each, one at a time. Make sure you remove the to_json call, as we do not need it anymore:
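Because only one generator is allowed per SELECT in Spark-style SQL, each explode_outer gets its own query level. A sketch of the first (email) pass:

```sql
-- Outer query explodes email; a second pass would explode crmid the same way.
SELECT uuid,
       explode_outer(identityMap['email'].id) AS email,
       identityMap
FROM (
  SELECT concat(coalesce(identityMap['email'].id[0], ''),
                coalesce(identityMap['crmid'].id[0], '')) AS uuid,
         identityMap
  FROM profile_attribute_cd03f195_66b5_4a62_a6f9_c606e951c903
);
```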
You will get:
We need to explode this one more time, for crmid:
The results would be:
Using this table, we can do some very interesting analysis. For example, we can look at the histogram of email IDs in the system:
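A sketch of such a histogram query (the table name is a hypothetical stand-in for the exploded table built above; note that count(email) ignores NULLs):

```sql
-- For each profile (uuid), count its email identities, then bucket
-- profiles by that count. count(email) skips NULL emails.
SELECT email_count, count(*) AS unique_ids
FROM (
  SELECT uuid, count(email) AS email_count
  FROM exploded_identities   -- hypothetical name for the table built above
  GROUP BY uuid
)
GROUP BY email_count
ORDER BY email_count;
```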
The answer looks like this:
Why are we seeing 976 as a bucket with 0 counts for unique_ids? The sum of 8,225 and 976 adds up to the number of profiles. Why did we even get this result in the first place? Is there something wrong with the way the SQL is written, or is something more profound happening in the identity graph?
Still cannot figure it out? What if the email and crmid identities were non-existent? You have null rows in the table that you should clean before you do any analytics. For the identity lookup use case, which is the goal of this section, it would not matter.
In our example, we will simplify the use case since we do not have multiple identities per namespace: we will just use the first element to create a simple map.
The segment_id needs to be resolved to a segment name. Note that the subqueries have to be named in order to reference them in the INNER JOIN clause. We also create a unique UID by concatenating the ID values:
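A hedged sketch of the resolution join; the segment-definition table name and its column names are hypothetical, and the segmentMembership map key and explode syntax follow Spark-style SQL conventions:

```sql
-- Both subqueries are named (p, s) so they can be referenced in the join.
-- Table and column names below are hypothetical.
SELECT p.uuid, p.segment_id, s.segment_name
FROM (
  SELECT concat(coalesce(identityMap['email'].id[0], ''),
                coalesce(identityMap['crmid'].id[0], '')) AS uuid,
         explode_outer(segmentMembership['ups']) AS (segment_id, status)
  FROM profile_attribute_cd03f195_66b5_4a62_a6f9_c606e951c903
) p
INNER JOIN (
  SELECT segment_id, segment_name
  FROM segment_definition_dataset   -- hypothetical name
) s
ON p.segment_id = s.segment_id;
```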
Materialize this table to the data lake. Use the DROP TABLE clause to wipe out an existing table:
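A minimal sketch of the materialization, with a hypothetical table name standing in for the resolved query above:

```sql
-- Wipe out any previous materialization, then persist to the data lake.
DROP TABLE IF EXISTS profile_segment_flat;   -- hypothetical table name
CREATE TABLE profile_segment_flat AS
  SELECT uuid, email, crmid, segment_id, segment_name
  FROM resolved_segments;   -- hypothetical name for the query result above
```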
Explore what is contained in the dataset. We should see 13K rows:
Identify records that have NULL crmid values:
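A sketch of that check (the materialized table name is hypothetical):

```sql
-- Profiles where the crmid identity was absent at snapshot time.
SELECT *
FROM profile_segment_flat   -- hypothetical materialized table name
WHERE crmid IS NULL;
```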
Here we will count the number of people versus what we see in the UI. There are some segments with zero counts that do not agree with the UI. What do you think happened?
Do not materialize the dataset yet. Instead, let us fix this with COALESCE:
We still do not materialize, because we need to check the count. It should be 8,225 in our case.
The above query result does not agree with what we see below. Where is the difference?
Exploding segmentMembership eliminated unsegmented profiles.
If you add 6,500 (profiles attached to at least one segment) + 1,725 (profiles not attached to any segment), you get 8,225, which is our count.
Let us create some analytical queries:
Double-check:
Even before typing this query, we should be able to predict the kind of answer we will get. The lowest-sized audience gives you the upper limit on the overlap. Can you figure out which of these segments that is?
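A sketch of a two-segment overlap via a self-join on the synthetic uuid (the table and segment names are hypothetical); adding further joins extends this to the 3- or 4-segment overlaps promised at the top:

```sql
-- Profiles present in both segments: self-join on the synthetic uuid.
-- Table and segment names are hypothetical.
SELECT count(DISTINCT a.uuid) AS overlap_count
FROM profile_segment_flat a
JOIN profile_segment_flat b
  ON a.uuid = b.uuid
WHERE a.segment_name = 'Segment A'
  AND b.segment_name = 'Segment B';
```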
However, if we execute the query, we see the not-so-surprising result.