How AI reinforces gender bias—and what we can do about it

Academic Paper / Article

May 22, 2025

Source: UN Women

Artificial intelligence (AI) is transforming our world—but when it reflects existing biases, it can reinforce discrimination against women and girls. From hiring decisions to healthcare diagnoses, AI systems can amplify gender inequalities when trained on biased data. So how can we ensure AI is ethical and inclusive? Zinnya del Villar, a leading expert in responsible AI, shares insights on the challenges and solutions in a recent conversation with UN Women.

What is AI gender bias and why does it matter?

“AI systems, learning from data filled with stereotypes, often reflect and reinforce gender biases,” says Zinnya del Villar. “These biases can limit opportunities and diversity, especially in areas like decision-making, hiring, loan approvals, and legal judgments.”

At its core, AI is about data. It is a set of technologies that enable computers to perform complex tasks faster than humans can. AI systems, such as machine learning models, learn to perform these tasks from the data they are trained on. When that training data is biased, the models can reinforce existing inequalities and fuel gender discrimination.

Imagine training a machine to make hiring decisions by showing it examples from the past. If most of those examples carry conscious or unconscious bias – for example, showing men as scientists and women as nurses – the AI may learn that men and women are suited to different roles and make biased decisions when filtering applications.

This is called AI gender bias: the AI treats people differently on the basis of their gender because that is what it learned from the biased data it was trained on.
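To make the mechanism concrete, here is a minimal sketch (not taken from the article) of how a hiring model can absorb historical bias. It uses scikit-learn and entirely synthetic data in which women with the same experience were historically hired less often; the feature names, numbers, and probabilities below are illustrative assumptions, not real figures.

```python
# Toy illustration: a classifier trained on synthetically biased hiring
# data learns to penalize one gender. All data here is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Features: gender (0 = man, 1 = woman) and years of experience.
gender = rng.integers(0, 2, size=n)
experience = rng.normal(5, 2, size=n)

# Historical "hired" labels that were biased against women:
# same experience, but women were hired far less often.
base_rate = 1 / (1 + np.exp(-(experience - 5)))
p_hire = base_rate * np.where(gender == 1, 0.4, 1.0)
hired = rng.random(n) < p_hire

# Train a simple screening model on the biased history.
X = np.column_stack([gender, experience])
model = LogisticRegression().fit(X, hired)

# The learned weight on the gender feature comes out negative:
# the model reproduces the historical bias when scoring candidates.
print("coefficients [gender, experience]:", model.coef_[0])

# Two otherwise identical candidates receive different scores.
man, woman = [0, 6.0], [1, 6.0]
print("P(hire | man):  ", model.predict_proba([man])[0, 1])
print("P(hire | woman):", model.predict_proba([woman])[0, 1])
```

Because the historical labels already encode the bias, the model is never told to discriminate; it simply copies the pattern it finds in its training data, which is exactly the dynamic the article describes.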

Full article published by UN Women on 5 February 2025.

 
