The No-Kno platform consists of three components: connectors to your ad accounts and owned media; an asset analysis pipeline that continuously scans your creative assets with state-of-the-art image analysis; and a dashboard that gives you insights into the diversity and inclusiveness of your campaigns.
With the No-Kno dashboard you can find out how different demographics are represented in your ads, along with potential negative stereotypes. Below are some examples of insights retrieved from the No-Kno platform.
An analysis of top-selling car brands shows gender stereotypes in the automotive industry: 87% of car commercials featured male drivers, while only 47% featured female drivers.
According to an analysis of ethnicity over time, one brand initially cast more Black talent in response to the BLM movement, but reverted to its old patterns in the years after the peak.
Brands can set objectives for the demographics they want to see represented in their ads, including age, gender, and ethnicity, and use the dashboards to continuously monitor metrics against those objectives.
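Monitoring metrics against objectives amounts to comparing measured representation shares with target shares. The sketch below is a hypothetical illustration, not the No-Kno API; the group names and figures are made up:

```python
def representation_gaps(objectives: dict, measured: dict) -> dict:
    """Return the shortfall (objective minus measured share) per group
    for every group that falls below its target."""
    return {
        group: round(target - measured.get(group, 0.0), 3)
        for group, target in objectives.items()
        if target > measured.get(group, 0.0)
    }

# Hypothetical targets and measurements (fractions of appearances).
objectives = {"female": 0.50, "age_50_plus": 0.20}
measured = {"female": 0.42, "age_50_plus": 0.25}
print(representation_gaps(objectives, measured))  # {'female': 0.08}
```

A dashboard alert could then fire whenever the returned dictionary is non-empty.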
An analysis of four competing brands shows the age distribution of the talents in their videos.
The No-Kno Diversity Score is a single metric that allows for comparison across brands, markets and over time.
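The exact formula behind the Diversity Score is not specified here. One standard way to collapse a distribution of group shares into a single number that is comparable across brands and markets is normalized Shannon entropy; the sketch below uses that as an assumed stand-in, not the actual No-Kno metric:

```python
import math

def diversity_score(shares: dict[str, float]) -> float:
    """Normalized Shannon entropy of group shares:
    1.0 = perfectly even representation, 0.0 = a single group only."""
    k = len(shares)
    if k <= 1:
        return 0.0
    h = -sum(p * math.log(p) for p in shares.values() if p > 0)
    return h / math.log(k)

print(diversity_score({"18-24": 0.25, "25-34": 0.25, "35-44": 0.25, "45+": 0.25}))  # 1.0
print(diversity_score({"18-24": 1.0, "25-34": 0.0, "35-44": 0.0, "45+": 0.0}))     # 0.0
```

Because the score is normalized to [0, 1], it can be tracked over time and compared across campaigns regardless of how many groups are measured.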
Our measurements of demographics are based on the visual appearance of the people depicted in your advertisements. Make-up, for instance, may make some talents appear younger or older than they actually are. This is not a problem. We are not interested in finding out the exact demographic of the talents in your ads, but rather which demographic they represent.
As an example, if a talent is 37 years old in reality, but looks 27 thanks to make-up, they will represent the age group 25-34 to your viewers, and not 35-44.
Age: The image analysis model returns an age range; we take the mean of that range as the visual age.
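The age step can be sketched as follows. The bucket boundaries below are assumptions chosen to match the 25-34 / 35-44 groups mentioned earlier, not a documented No-Kno scheme:

```python
# Assumed reporting buckets (lower and upper bound, inclusive).
AGE_BUCKETS = [(0, 17), (18, 24), (25, 34), (35, 44), (45, 54), (55, 64), (65, 120)]

def visual_age_bucket(age_range: tuple[int, int]) -> str:
    """Map the model's predicted age range to a reporting bucket
    via the mean of the range (the 'visual age')."""
    mean_age = sum(age_range) / 2
    for low, high in AGE_BUCKETS:
        if low <= mean_age <= high:
            return f"{low}-{high}"
    return "unknown"

print(visual_age_bucket((24, 30)))  # mean 27.0 -> "25-34"
```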
Gender: We realise that gender is not a simple binary attribute that can be determined by visual appearance alone. However, our model looks at facial features to classify a person in a binary way, as male or female.
Ethnicity: A person's dominant and secondary races/ethnicities are inferred by our model, along with their probability of belonging to each.
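Selecting the dominant and secondary groups from a model's probability output can be sketched as below. The output format and figures are assumptions for illustration:

```python
def top_two(probs: dict[str, float]) -> list[tuple[str, float]]:
    """Return (label, probability) pairs for the dominant and
    secondary groups, highest probability first."""
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:2]

# Hypothetical model output for one face.
probs = {"White": 0.08, "Black": 0.61, "Middle Eastern": 0.22, "East Asian": 0.09}
print(top_two(probs))  # [('Black', 0.61), ('Middle Eastern', 0.22)]
```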
Stereotypes: Often, negative stereotypes are specific to an industry or a product. "Which gender is behind the wheel and who is the passenger in a car commercial?" can reflect certain gender stereotypes specific to the automotive industry.
As our platform develops, we will be able to measure more behavioral patterns. In addition, we work with our clients to determine which behavioral patterns and potential stereotypes they want to measure.
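The driver-versus-passenger question above reduces to a tally over per-scene annotations. The annotation format here is an assumed input, not the No-Kno data model:

```python
from collections import Counter

# Hypothetical per-scene annotations from a set of car commercials.
scenes = [
    {"role": "driver", "gender": "male"},
    {"role": "driver", "gender": "female"},
    {"role": "passenger", "gender": "female"},
    {"role": "driver", "gender": "male"},
]

# Count which gender appears behind the wheel.
driver_counts = Counter(s["gender"] for s in scenes if s["role"] == "driver")
print(driver_counts)  # Counter({'male': 2, 'female': 1})
```

The same tally, run over a whole industry's commercials, yields statistics like the automotive example cited earlier.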
Other intersectional aspects: Machine learning cannot infer largely non-visible characteristics such as religion or sexual orientation; neither can the viewers of your ads.
The purpose of No-Kno is to analyze advertisements at scale. As with any quantitative method, it has its limitations. Algorithms used to analyze images detect visual features and do not take into account cultural or personal context. You should consider No-Kno an essential part of your quest for more inclusive communication, alongside qualitative research, strategic dialogue, and an inclusive organization.
When the input image is sufficiently large, has good lighting, is sharp, etc., our model is at least as accurate as a human reviewer.
We always keep a human in the loop, however. An image will be flagged if it is too small, blurry, or if the model returns low confidence in a prediction. An editor may then decide to keep or edit the results.
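The flagging rule described above can be sketched as a simple gate. The thresholds here are assumptions for illustration; production values would be tuned per model:

```python
MIN_EDGE_PX = 64        # assumed smallest acceptable image edge
MIN_SHARPNESS = 100.0   # assumed blur threshold, e.g. variance of a Laplacian
MIN_CONFIDENCE = 0.80   # assumed model-confidence floor

def needs_review(width: int, height: int, sharpness: float, confidence: float) -> bool:
    """Flag an image for a human editor if it is too small, too blurry,
    or the model's prediction confidence is low."""
    return (
        min(width, height) < MIN_EDGE_PX
        or sharpness < MIN_SHARPNESS
        or confidence < MIN_CONFIDENCE
    )

print(needs_review(640, 480, 250.0, 0.95))  # False -> auto-accepted
print(needs_review(640, 480, 250.0, 0.55))  # True  -> sent to an editor
```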
To avoid bias in race/ethnicity inference, we use a model based on the FairFace dataset: a face image dataset of 108,501 images, balanced across seven race groups: White, Black, South Asian, East Asian, Southeast Asian, Middle Eastern, and Latine/Hispanic.