When we talk about Generative Artificial Intelligence (AI), there is often controversy, or even fear, that AI might go rogue or make decisions that negatively impact us as humans. But instead of fear, we should recognize two key points:
- We’ve seen a lot of movies, and AI today is not what science fiction depicts.
- AI indeed has a bias problem.
As the saying goes, to solve a problem, you first have to acknowledge it.
What is Bias?
Bias is a prejudice. If we look it up on Google, the Oxford Dictionary defines it as:
(noun)
- Prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair: “there was evidence of bias against foreign applicants.”
- [Statistics] A systematic distortion of a statistical result due to a factor not allowed for in its derivation.
The second sense is more technical: it concerns the systematic distortion of a statistical result.
Recognizing bias is crucial in developing fair and effective AI systems. By understanding and addressing the inherent biases in AI, we can create more balanced and equitable technological solutions.
Researchers from the National Institute of Standards and Technology (NIST) have identified three broad categories of bias: human, systemic, and statistical/computational.
Human (Cognitive) Bias
Human bias is rooted in the beliefs, attitudes, and prejudices of individuals.
It occurs when people have incomplete information and use simple heuristics to fill in the gaps.
An example of human bias is someone who dislikes spicy food. Perhaps they’ve never had a bad experience with spicy food, nor are they closely connected to a culture where spicy dishes are a staple, yet they avoid these foods at all costs and hold a prejudice against them before even trying them. This person might think, “Hey, the salsa served with tacos must be horrible.”
Cognitive bias is present among individuals and groups and is fundamental to the human mind.
Systemic Bias
Systemic bias, often referred to as institutional or historical bias, occurs within systems, institutions, or organizations. The policies, practices, or structures of a system can lead to biases against one or more communities, even if the individuals within the system are not intentionally biased. Examples include institutional racism, sexism, and homophobia.
Statistical/Computational Bias
Statistical or computational bias arises when the sample data does not represent the population. Artificial intelligence systems produce biased results or decisions due to biased data, biased training, or flaws inherent in an algorithm’s design. Computational bias often reveals a systemic bias.
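To make this concrete, here is a minimal, hypothetical sketch in Python (the groups, weights, and sample sizes are all invented for illustration) of how a biased data-collection process yields a training sample that misrepresents the population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: evenly split between two groups
population = np.array(["A"] * 50_000 + ["B"] * 50_000)

# Biased collection: a member of group A is four times as likely
# to end up in the training sample as a member of group B
weights = np.where(population == "A", 4.0, 1.0)
weights /= weights.sum()
sample = rng.choice(population, size=1_000, replace=False, p=weights)

for group in ("A", "B"):
    print(group, round((sample == group).mean(), 2))
# Prints roughly: A 0.8 / B 0.2 -- a model trained on this sample
# "sees" a world that looks nothing like the real population.
```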
How Does Bias Impact AI?
Human diversity is vast, yet bias often prevents that diversity from being reflected in the systems we build.
For example, OpenAI’s Sora, a well-known generative AI model, when asked to depict “queer people,” reportedly signaled queerness mainly by adding purple or pink streaks of hair.
Another example is a voice recognition AI failing to understand certain racialized accents or dialects.
Or a biased lending algorithm that is 80% more likely to deny mortgage applications from Black applicants than from comparable white applicants.
These are brief examples of how biases in AI can result not only in poor user experiences but also in unfair discrimination and the stigmatization of minority communities. So, how can we address these biases in AI products from the design stage onward?
Addressing Human Bias
Because people are not used to interacting with AI, it often feels like a black box—we don’t know how it works. When we interact with AI, we sometimes do so with fear or caution because we have no idea what results it will give us (and often, people have high expectations for the answers). Therefore, to make AI feel more human, we can work on the following points:
Transparency
Clearly explain how the AI works, what data it uses, and how it reaches its conclusions. This helps demystify the technology and build trust. Aim to expose the aspects that impact user trust and decision-making, and avoid trying to explain the entire system, especially when the rationale is complex or unknown.
Diverse and Inclusive User Research
For instance, at Wizeline, we prioritize understanding the various needs, backgrounds, and contexts of our clients’ users, ensuring our designs meet their needs and don’t favor one group over another. We conduct comprehensive user research, incorporating feedback from a wide range of demographics to inform our design choices.
Collaborative Design
Include users, especially those from underrepresented groups, in the design process. This helps prevent the exclusion of certain perspectives and promotes more equitable design decisions. As our clients know, Wizeline actively engages with diverse user groups throughout the design phase to ensure inclusivity and fairness.
Dealing with Bias in AI
Datasets have worldviews. Machine learning models are inherently biased because they are fed data that reflects the biases in our society, unless that data has been carefully cleaned up.
At Wizeline, inclusive design begins with recognizing the possibility of bias and abuse in AI solutions. We ensure everyone on our team is aware of these vulnerabilities so they can identify problems even during product development and testing. By doing so, we can find workarounds and possibly even solve these issues.
For instance, when developing a product for a client, we foster the use of diverse datasets and regularly test for biased outcomes to ensure our AI system produces fair and accurate results.
Figure: Midjourney outputs for the prompt “a scientist.” The first image comes from the model’s early public release; the second from a mid-2024 version. In the second, at least one image shows a person of a different gender, skin tone, and hair color.
Incorporating Safety Features in the Interface
Let users report instances of errors, bias, and abuse within your design. If your system produces a purely AI-generated outcome, allow users to configure different parameters and regenerate results.
At Wizeline, we integrate feedback mechanisms in our AI products to allow users to report biases. This helps us continuously improve our systems and address any issues promptly.
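As one possible shape for such a mechanism, here is a minimal, hypothetical Python sketch (the OutputReport type, the report taxonomy, and the submit_report helper are all invented for illustration) of a user-facing report pipeline that routes flagged outputs to human review:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical report taxonomy; adapt it to your product.
REPORT_TYPES = {"error", "bias", "abuse"}

@dataclass
class OutputReport:
    """A user-submitted report about an AI-generated result."""
    output_id: str    # identifies the generated result being flagged
    report_type: str  # one of REPORT_TYPES
    comment: str = "" # optional free-text description from the user
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def submit_report(report: OutputReport, queue: list) -> None:
    """Validate a report and enqueue it for human review."""
    if report.report_type not in REPORT_TYPES:
        raise ValueError(f"unknown report type: {report.report_type}")
    queue.append(report)

# Usage: wire this to a "Report this result" button in the UI.
review_queue: list[OutputReport] = []
submit_report(
    OutputReport("gen-42", "bias", "Output stereotypes my accent"),
    review_queue,
)
```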
Provide a way forward
Providing access to a person can be one way to make sure users’ concerns and problems are directly addressed.
Make changes to the product
Sometimes the error a user reports can’t be remedied directly, but actions can be taken to make sure other users don’t encounter the same problem.
Involve the community
See if it is possible to have a dedicated team that reviews or moderates content. For large systems, you might incentivize your users to be your first line of defense.
Use AI to Counter AI
Companies have developed algorithms to help detect biases in AI. Below are some examples. Each of these tools approaches the question of ethics from a different perspective:
- Google’s What-If Tool: This tool interrogates what fairness means; it lets product developers slice their data according to different definitions of fairness, making the trade-offs between those definitions visible so humans can decide how to measure fairness and act accordingly.
- Fairlearn: An open-source toolkit from Microsoft for data scientists and developers to assess and improve the fairness of their AI systems.
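To illustrate the second tool, the sketch below uses Fairlearn’s MetricFrame and demographic_parity_difference on synthetic data (the model and dataset are invented for the example) to break a model’s accuracy and selection rate down by group:

```python
# pip install fairlearn scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)

# Synthetic features, labels, and a sensitive attribute (group A/B)
X = rng.normal(size=(1_000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)
group = rng.choice(["A", "B"], size=1_000)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Accuracy broken down by group: large gaps hint at biased outcomes
frame = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=pred,
                    sensitive_features=group)
print(frame.by_group)

# Difference in selection rates between groups (0.0 means parity)
print(demographic_parity_difference(y, pred, sensitive_features=group))
```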
At Wizeline, we utilize these tools and methodologies to ensure our AI systems are as fair and unbiased as possible. We continuously strive to improve our processes and incorporate the latest tools and frameworks in AI ethics and fairness.
Conclusion
Addressing bias in AI is a multifaceted challenge that requires ongoing effort and vigilance. By recognizing and understanding the different types of biases—human, systemic, and statistical/computational—we can develop more equitable and effective AI systems. Wizeline is committed to practicing what we preach by integrating transparency, diverse user research, and collaborative design into our workflows. By doing so, we aim to create AI products that are not only innovative but also fair and inclusive, ultimately fostering a more just technological landscape.
By focusing on transparency, inclusivity, and community involvement, we can mitigate bias and build AI systems that truly benefit all users. Let’s continue to push for fairness in AI and create a future where technology serves everyone equally.
Source: Ioana Teleanu, AI for Designers, Interaction Design Foundation: https://www.interaction-design.org/courses/ai-for-designers/