Platform FAQ

What demographic and behavioral patterns do you measure?

Our demographic measurements are based on the visual appearance of the people depicted in your advertisements. Make-up, for instance, may make some talents appear younger or older than they actually are. This is not a problem: we are not interested in the exact demographics of the talents in your ads, but rather in which demographic they represent.
For example, if a talent is 37 years old in reality but looks 27 thanks to make-up, they will represent the age group 25-34 to your viewers, not 35-44.

Age: The image analysis model returns an age range; we take the mean of that range as the visual age.
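
For illustration, a minimal Python sketch of this step could look as follows; the bucket boundaries, function names, and the 25-29 example are simplifying assumptions for the sketch, not our production code:

    # Illustrative sketch only: bucket boundaries and names are assumptions,
    # not the exact buckets used by the platform.
    AGE_BUCKETS = [(17, "0-17"), (24, "18-24"), (34, "25-34"),
                   (44, "35-44"), (54, "45-54"), (64, "55-64")]

    def visual_age(age_range: tuple[int, int]) -> float:
        """Use the mean of the model's predicted age range as the visual age."""
        low, high = age_range
        return (low + high) / 2

    def age_group(age: float) -> str:
        """Map a visual age onto a reporting bucket."""
        for upper, label in AGE_BUCKETS:
            if age <= upper:
                return label
        return "65+"

    # A talent the model sees as 25-29 is reported in the 25-34 group,
    # regardless of their real age.
    print(age_group(visual_age((25, 29))))  # -> "25-34"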

Gender: We realise that gender is not a simple binary attribute that can be determined by visual appearance alone. However, our model looks at facial features to classify a person in a binary way, as male or female.

Ethnicity: Our model infers a person's dominant and secondary races/ethnicities, along with the probability of them belonging to each.
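
For illustration, selecting the dominant and secondary groups from a set of per-group probabilities could be sketched like this; the probabilities shown are invented for the example and are not real model output:

    # Illustrative sketch only: the probabilities below are made up for the example.
    def dominant_and_secondary(probs: dict[str, float]) -> list[tuple[str, float]]:
        """Return the two most likely race/ethnicity labels with their probabilities."""
        ranked = sorted(probs.items(), key=lambda item: item[1], reverse=True)
        return ranked[:2]

    example = {
        "White": 0.08, "Black": 0.03, "South Asian": 0.62, "East Asian": 0.04,
        "Southeast Asian": 0.05, "Middle Eastern": 0.15, "Latine/Hispanic": 0.03,
    }
    print(dominant_and_secondary(example))
    # -> [('South Asian', 0.62), ('Middle Eastern', 0.15)]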

Stereotypes: Negative stereotypes are often specific to an industry or a product. "Which gender is behind the wheel and who is the passenger in a car commercial?", for example, can reflect gender stereotypes particular to the automotive industry.
As our platform develops, we will be able to measure more behavioral patterns. In addition, we work with our clients to determine which behavioral patterns and potential stereotypes they want to measure.

Other intersectional aspects: Machine learning cannot infer characteristics that are (mostly) not visible, such as religion or sexual orientation, and neither can the viewers of your ads.

What are the limitations of your image analysis?

The purpose of No-Kno is to analyze advertisements at scale. As with any quantitative method, it has its limitations. Algorithms used to analyze images detect visual features and do not take into account cultural or personal context. You should consider No-Kno an essential part of your quest for more inclusive communication, alongside qualitative research, strategic dialogue, and an inclusive organization.

How good is your image analysis and do you keep a human in the loop?

When the input image is sufficiently large, has good lighting, is sharp, etc., our model is at least as accurate as a human reviewer.

We always keep a human in the loop, however. An image is flagged if it is too small or blurry, or if the model returns low confidence in a prediction. An editor may then decide to keep or edit the results.
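
As a simplified sketch of these flagging rules, the logic behaves along the following lines; the thresholds and parameter names are illustrative assumptions, not our exact review criteria:

    # Illustrative sketch only: thresholds are assumptions, not the platform's values.
    MIN_SIDE_PX = 200       # assumed minimum width/height of a usable face crop
    MIN_SHARPNESS = 100.0   # assumed variance-of-Laplacian style sharpness score
    MIN_CONFIDENCE = 0.80   # assumed minimum model confidence for a prediction

    def needs_human_review(width: int, height: int,
                           sharpness: float, confidence: float) -> bool:
        """Flag an image for an editor if it is too small, too blurry,
        or the model is not confident enough in its prediction."""
        too_small = min(width, height) < MIN_SIDE_PX
        too_blurry = sharpness < MIN_SHARPNESS
        low_confidence = confidence < MIN_CONFIDENCE
        return too_small or too_blurry or low_confidence

    # A small, slightly blurry crop is flagged even when the prediction is confident.
    print(needs_human_review(width=150, height=180, sharpness=80.0, confidence=0.93))  # -> True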

How do you avoid bias in your model?

To avoid bias in race/ethnicity inference, we use a model based on the FairFace dataset, a face image dataset of 108,501 images balanced across seven race groups: White, Black, South Asian, East Asian, Southeast Asian, Middle Eastern, and Latine/Hispanic.
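
As an illustration of what such balance means in practice, the following sketch counts images per race group in a label file; the file and column names are assumptions about the FairFace label CSVs, not part of our pipeline:

    # Illustrative sketch only: file and column names are assumed, not guaranteed.
    from collections import Counter
    import csv

    def race_counts(label_csv: str) -> Counter:
        """Count the number of face images per race group in a label file."""
        counts: Counter = Counter()
        with open(label_csv, newline="") as f:
            for row in csv.DictReader(f):
                counts[row["race"]] += 1
        return counts

    counts = race_counts("fairface_label_train.csv")  # assumed filename
    total = sum(counts.values())
    for race, n in counts.most_common():
        print(f"{race:20s} {n:7d} ({n / total:.1%})")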