No-Kno Platform

Platform overview


The No-Kno platform consists of three components: connectors to your ad accounts and owned media; an asset analysis pipeline that continuously scans your creative assets with state-of-the-art image analysis; and a dashboard that gives you insights into the diversity and inclusiveness of your campaigns.

Chart of the 3 components of the No-Kno platform: connectors, asset analysis and the No-Kno dashboard

Example Insights


The No-Kno dashboard shows you how different demographics are represented in your ads and surfaces potential negative stereotypes. Below are some examples of insights retrieved from the No-Kno platform.

Chart of female versus male drivers behind the wheel in car commercials as analysed by the No-Kno platform.

Industry insights

An analysis of top-selling car brands reveals gender stereotypes in the automotive industry: 87% of car commercials featured male drivers, while only 47% featured female drivers.

Chart of the ethnic distribution of talents over time as analysed by the No-Kno platform.

Brand insights

An analysis of ethnicity over time shows that a brand initially included more Black talent in response to the BLM movement, but reverted to its old patterns in the years after the peak.

A chart from the No-Kno dashboard, showing the objectives for age representation versus the actuals.

Setting objectives

Brands can set objectives for the demographics they want to see represented in their ads, across age, gender, and ethnicity. Dashboards allow continuous monitoring of the measured metrics against these objectives.
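As an illustration of what that monitoring amounts to, the sketch below compares measured representation against a brand's objectives. The function name, data shapes, and tolerance are assumptions for the example, not the No-Kno API.

def compare_to_objectives(actual: dict[str, float],
                          objective: dict[str, float],
                          tolerance: float = 0.05) -> dict[str, str]:
    """Flag each demographic group as on target, under- or over-represented.

    `actual` and `objective` map group names (e.g. "18-24", "25-34")
    to shares in [0, 1]. Illustrative only.
    """
    status = {}
    for group, target in objective.items():
        measured = actual.get(group, 0.0)
        if measured < target - tolerance:
            status[group] = "under-represented"
        elif measured > target + tolerance:
            status[group] = "over-represented"
        else:
            status[group] = "on target"
    return status

# Example: age objectives versus what the asset analysis actually measured.
objectives = {"18-24": 0.20, "25-34": 0.30, "35-44": 0.25, "45+": 0.25}
actuals    = {"18-24": 0.05, "25-34": 0.55, "35-44": 0.25, "45+": 0.15}
print(compare_to_objectives(actuals, objectives))
# {'18-24': 'under-represented', '25-34': 'over-represented',
#  '35-44': 'on target', '45+': 'under-represented'}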

4 charts comparing the age distribution of talents engaged by 4 different brands. Each chart shows the No-Kno Diversity Score.

Competitor insights

An analysis of 4 competing brands shows the age distribution of the talents in their videos.
The No-Kno Diversity Score is a single metric that allows comparison across brands, across markets, and over time.
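The actual No-Kno Diversity Score formula is not described on this page. Purely as an illustration of how a single comparable index can be derived from demographic shares, the sketch below uses normalized Shannon entropy, which is 1.0 for a perfectly even distribution and 0.0 when only one group appears. This is an assumed stand-in, not the No-Kno formula.

import math

def normalized_entropy_score(shares: dict[str, float]) -> float:
    """Illustrative diversity index: normalized Shannon entropy of group shares.

    Returns 1.0 when all groups are equally represented and 0.0 when a
    single group accounts for everything. Not the actual No-Kno score.
    """
    positive = [p for p in shares.values() if p > 0]
    if len(positive) <= 1:
        return 0.0
    entropy = -sum(p * math.log(p) for p in positive)
    return entropy / math.log(len(shares))

print(normalized_entropy_score({"18-24": 0.25, "25-34": 0.25, "35-44": 0.25, "45+": 0.25}))  # 1.0
print(normalized_entropy_score({"18-24": 0.0, "25-34": 0.9, "35-44": 0.1, "45+": 0.0}))      # ~0.23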

Platform FAQ


What demographic and behavioral patterns do you measure?

Our measurements of demographics are based on the visual appearance of the people depicted in your advertisements. Make-up, for instance, may make some talents appear younger or older than they actually are. This is not a problem. We are not interested in finding out the exact demographic of the talents in your ads, but rather which demographic they represent.
As an example, if a talent is 37 years old in reality, but looks 27 thanks to make-up, they will represent the age group 25-34 to your viewers, and not 35-44.

Age: The image analysis model returns an age range; we take the mean of that range as the visual age.
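As a sketch, turning a predicted age range into a visual age and an age bracket might look like the snippet below. The bracket boundaries follow the 25-34 / 35-44 groups mentioned above; the function names are illustrative, not the No-Kno implementation.

# Illustrative: map a model's predicted age range to a "visual age" and bracket.
AGE_BRACKETS = [(0, 17), (18, 24), (25, 34), (35, 44), (45, 54), (55, 64), (65, 120)]

def visual_age(age_range: tuple[int, int]) -> float:
    """Mean of the predicted age range, e.g. (24, 30) -> 27.0."""
    low, high = age_range
    return (low + high) / 2

def age_bracket(age: float) -> str:
    """Return the age group the visual age falls into."""
    for low, high in AGE_BRACKETS:
        if low <= age <= high:
            return f"{low}-{high}"
    return "unknown"

# A 37-year-old talent who looks 27 on screen is counted in the 25-34 bracket.
print(age_bracket(visual_age((24, 30))))  # "25-34"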

Gender: We realise that gender is not a simple binary attribute that can be determined by visual appearance alone. However, our model looks at facial features to classify a person in a binary way, as male or female.

Ethnicity: Our model infers a person's dominant and secondary race/ethnicity, along with the probability of belonging to each group.
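A hypothetical shape of that output, and how the dominant and secondary groups could be extracted from it, is sketched below. The labels and probabilities are example values, not real model output.

# Illustrative: pick the dominant and secondary groups from per-group probabilities.
def dominant_and_secondary(probs: dict[str, float]) -> list[tuple[str, float]]:
    """Return the two most probable race/ethnicity groups with their probabilities."""
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:2]

model_output = {
    "White": 0.08, "Black": 0.62, "South Asian": 0.03, "East Asian": 0.02,
    "Southeast Asian": 0.02, "Middle Eastern": 0.05, "Latine/Hispanic": 0.18,
}
print(dominant_and_secondary(model_output))
# [('Black', 0.62), ('Latine/Hispanic', 0.18)]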

Stereotypes: Negative stereotypes are often specific to an industry or a product. "Which gender is behind the wheel and who is the passenger in a car commercial?", for example, reflects gender stereotypes specific to the automotive industry.
As our platform develops, we will be able to measure more behavioral patterns. In addition, we work with our clients to determine which behavioral patterns and potential stereotypes they want to measure.
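In practice, measuring a pattern like the driver/passenger example can come down to tallying detected roles by perceived gender across analysed scenes. The sketch below assumes a simple per-scene record format; it is not the No-Kno data schema.

# Illustrative: share of perceived genders in the driver seat across analysed scenes.
from collections import Counter

scenes = [
    {"role": "driver", "perceived_gender": "male"},
    {"role": "passenger", "perceived_gender": "female"},
    {"role": "driver", "perceived_gender": "male"},
    {"role": "driver", "perceived_gender": "female"},
]

driver_counts = Counter(s["perceived_gender"] for s in scenes if s["role"] == "driver")
total_drivers = sum(driver_counts.values())
for gender, count in driver_counts.items():
    print(f"{gender} drivers: {count / total_drivers:.0%}")
# male drivers: 67%
# female drivers: 33%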

Other intersectional aspects: Machine learning cannot infer characteristics that are (mostly) not visible, such as religion or sexual orientation, and neither can the viewers of your ads.

What are the limitations of your image analysis?

The purpose of No-Kno is to analyze advertisements at scale. As with any quantitative method, it has its limitations. Algorithms used to analyze images detect visual features and do not take into account cultural or personal context. You should consider No-Kno an essential part of your quest for more inclusive communication, alongside qualitative research, strategic dialogue, and an inclusive organization.

How good is your image analysis and do you keep a human in the loop?

When the input image is sufficiently large, well lit, and sharp, our model is at least as accurate as a human reviewer.

We always keep a human in the loop, however. An image is flagged if it is too small or blurry, or if the model returns a low-confidence prediction. An editor then decides whether to keep or edit the results.
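A minimal sketch of such a flagging rule is shown below; the thresholds and sharpness measure are illustrative assumptions, not the production values.

# Illustrative flagging rule for human review; all thresholds are assumptions.
MIN_FACE_PIXELS = 64 * 64   # assumed minimum face-crop size in pixels
MIN_SHARPNESS = 100.0       # assumed sharpness threshold (e.g. variance of Laplacian)
MIN_CONFIDENCE = 0.80       # assumed minimum model confidence

def needs_human_review(face_pixels: int, sharpness: float, confidence: float) -> bool:
    """Flag a detection for an editor if the crop is too small or blurry,
    or if the model is not confident in its prediction."""
    return (face_pixels < MIN_FACE_PIXELS
            or sharpness < MIN_SHARPNESS
            or confidence < MIN_CONFIDENCE)

print(needs_human_review(face_pixels=48 * 48, sharpness=250.0, confidence=0.93))    # True (too small)
print(needs_human_review(face_pixels=128 * 128, sharpness=250.0, confidence=0.93))  # False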

How do you avoid bias in your model?

To avoid bias in race/ethnicity inference, we use a model based on the FairFace dataset, a face image dataset of 108,501 images balanced across 7 race groups: White, Black, South Asian, East Asian, Southeast Asian, Middle Eastern, and Latine/Hispanic.

Get in touch