Can I avoid unwanted bias in my generated images?
Written by Christopher John
Updated over a year ago

Yes - Pencil Pro has automated filters in place that provide an initial check on whether a given AI output is appropriate. These filters screen for potentially inappropriate content, including profanity, racism and sexism. We also make use of the filters provided by the underlying models we use, e.g. GPT-4.

In addition, you can use The Brandtech Group's Bias Breaker, a proprietary tool designed as a step towards overcoming the inherent bias in large language models at the point of inference.

Bias Breaker uses dice-roll probability to add a layer of probability-backed inclusivity to your prompts. We configured six 'dice' around the most common dimensions of diversity (age, race, disability, gender and so on), so that when a user enters a simple prompt, like 'a CEO', the tool 'rolls those dice' and adds zero, one or two types of inclusivity, giving you back a more sophisticated prompt to use in any image generation model.
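To make the dice-roll idea concrete, here is a minimal Python sketch of that kind of prompt augmentation. The dimensions, attribute lists and probabilities below are invented purely for illustration; the actual Bias Breaker categories, wording and weighting are proprietary to The Brandtech Group and may differ.

```python
import random

# Hypothetical dice: six diversity dimensions, each with example values.
# These lists are invented for illustration, not Bias Breaker's real config.
DICE = {
    "age": ["young", "middle-aged", "older"],
    "gender": ["female", "male", "non-binary"],
    "race": ["Black", "East Asian", "South Asian", "Hispanic", "white"],
    "disability": ["wheelchair-using", "visually impaired"],
    "body_type": ["plus-size", "petite", "tall"],
    "culture": ["hijab-wearing", "tattooed"],
}

def bias_break(subject: str) -> str:
    """Return the subject with zero, one or two rolled diversity attributes."""
    count = random.choice([0, 1, 2])                  # zero, one or two additions
    rolled = random.sample(list(DICE), k=count)       # which dimensions came up
    attrs = [random.choice(DICE[d]) for d in rolled]  # one value per dimension
    return " ".join(attrs + [subject])

# 'a CEO' might become e.g. 'a young non-binary CEO'
# (grammar and word-order smoothing are left out for brevity)
print("a " + bias_break("CEO"))
```

Because the number of attributes is itself randomised, repeated use of the same simple prompt yields a spread of more and less specific prompts rather than one fixed rewrite.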

In other words, it creates inclusive prompts that are more nuanced and more configurable than the default system prompts applied by the foundation model providers.

It is also worth noting that with Pencil Pro there is always a human in the loop throughout the process, and no content is ever published without human approval. Pencil is designed to augment the effort of human teams, so there is always a person prompting, curating, editing and exporting the content Pencil helps you make.

You can learn more about using the Bias Breaker tool here.
