Discrimination

February 24, 2026
4 min read
Explore discrimination in AI, how biased algorithms impact fairness, and why ethical AI development is crucial to ensuring equity and inclusivity.

Definition

The unequal treatment of individuals or groups in AI outputs based on factors such as race, gender, or socioeconomic status.

Discrimination in AI

Discrimination in artificial intelligence occurs when automated systems treat individuals differently based on their membership in certain groups, leading to unfair or harmful outcomes. Although algorithms are often assumed to be neutral, real-world examples show that AI systems can reflect and amplify existing social biases. As AI becomes more embedded in areas such as recruitment, lending, and public services, these risks become more significant.

Discriminatory outcomes in AI usually emerge from a combination of historical data, design decisions, and social context rather than from the algorithm alone.

What is discrimination?

In a moral sense, discrimination involves differential treatment based on perceived membership in a socially salient group, where that treatment causes social harm. Not all differences in treatment are discriminatory. For example, excluding individuals from activities they cannot safely perform is generally considered reasonable.

Discrimination becomes ethically problematic when it relates to characteristics that have historically structured access to opportunity and dignity, such as gender, race, age, disability, or language.

How discrimination appears in AI systems

Language models

Word embeddings used in natural language processing learn associations from large text corpora. Research has shown that these models can encode gender stereotypes, such as associating women with caregiving roles and men with professional roles.
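This kind of association can be made concrete with cosine similarity. The sketch below uses tiny, made-up 3-dimensional vectors (illustrative values only, not weights from any real model) to show how a stereotyped corpus can leave a gender-neutral occupation vector sitting closer to one gendered pronoun than the other:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" (hypothetical values chosen to mimic a biased corpus).
embeddings = {
    "nurse":    [0.9, 0.1, 0.3],
    "engineer": [0.1, 0.9, 0.3],
    "she":      [0.8, 0.2, 0.1],
    "he":       [0.2, 0.8, 0.1],
}

# A positive score means the word sits closer to "she" than to "he".
def gender_lean(word):
    return (cosine(embeddings[word], embeddings["she"])
            - cosine(embeddings[word], embeddings["he"]))

print(gender_lean("nurse"))     # positive: leans towards "she"
print(gender_lean("engineer"))  # negative: leans towards "he"
```

Neither occupation is inherently gendered, yet the geometry learned from text places each one nearer to a gendered pronoun, which is exactly the pattern audits of real embedding models have reported.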

Recruitment algorithms

An internal Amazon hiring tool trained on past CVs learned to penalise applications associated with women, reflecting the gender imbalance in historical hiring data. The system was ultimately abandoned.

Credit scoring systems

In Finland, an automated credit decision was found to discriminate based on gender, age, language, and location, with a tribunal ruling that the outcome would have differed for another demographic profile.

How discriminatory AI causes harm

Allocative harms affect access to resources, opportunities, or services, such as jobs, credit, income, or safety. Unequal exposure to risk, even before direct harm occurs, can also count as allocative harm.

Representational harms shape how groups are portrayed and understood. These include stereotyping, misrecognition, and treating certain identities as the unmarked norm, which can reinforce social hierarchies.

While allocative harms are often easier to identify, representational harms can quietly legitimise unfair treatment over time.

How bias enters AI systems

Bias can enter AI systems at multiple stages of development:

  • Non-representative data: Training data that over-represents some groups and under-represents others leads to uneven model performance.
  • Label bias: When imperfect proxies are used as labels, for example arrest records standing in for crimes committed, existing injustices are repeated and reinforced.
  • Cultural unawareness: Design assumptions, such as rigid rules for names or identities, can unintentionally exclude people who do not conform to dominant cultural norms.
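One simple way to surface these problems in practice is to compare selection rates across groups, sometimes called the demographic parity difference. The sketch below uses hypothetical screening decisions (toy data; real audits need far larger samples and more than one metric):

```python
# Hypothetical outcomes from an automated screening model, tagged with a
# protected attribute. 1 = selected, 0 = rejected. (Illustrative data only.)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rate(records, group):
    """Fraction of a group's applicants that the model selected."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(decisions, "group_a")
rate_b = selection_rate(decisions, "group_b")

# A large gap is a signal to investigate the data and design, not proof of
# intent: the bias may have entered at any of the stages listed above.
parity_gap = rate_a - rate_b
print(rate_a, rate_b, parity_gap)
```

A gap like this does not identify *which* of the sources above is responsible; it only flags that the system's outputs differ by group and that the data, labels, and design assumptions deserve scrutiny.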

These sources of bias highlight that AI systems are shaped by social and historical conditions, not just technical choices.

Why ethical AI development matters

AI systems are not neutral tools. They reflect the values, assumptions, and constraints of the societies that produce them. Without deliberate safeguards, AI can entrench discrimination rather than reduce it.

Ethical AI development therefore requires examining the full lifecycle of a system, including data collection, design, deployment, and ongoing use, to ensure that automated decisions promote fairness, equity, and inclusivity rather than undermine them.

Key Takeaways

  • AI systems can discriminate by reproducing existing social biases.
  • Discrimination in AI can cause both unequal access to resources and harmful representations of groups.
  • Bias often enters AI through data, labels, and design assumptions.
  • AI is not neutral and reflects the context in which it is developed.
  • Ethical AI development is essential to support fairness and inclusivity.
