AI Bias and Limitations

An Interactive 80-Minute Class

Course Overview

This interactive class explores AI bias and limitations, focusing on ethical, social, and technical dimensions. Through hands-on activities, students will develop awareness of AI's limitations, critical thinking about bias sources, and practical skills for evaluating AI systems.

Class Structure - 80 Minutes

Introduction (5 min)

Quick definition of AI bias and examples

Stanford HAI (12 min)

Exploring articles and research on human-centered AI

AI Bill of Rights (12 min)

Understanding the five principles and creating scenarios

Moral Machine (12 min)

Ethical dilemmas in autonomous vehicle decisions

Gender Shades (12 min)

Facial recognition bias across gender and skin tone

DALL·E/Midjourney (12 min)

Testing AI image generation for stereotypes

Real or Fake News (10 min)

Distinguishing real headlines from AI-generated content

Conclusion (5 min)

Summary and final discussion

Learning Outcomes

Awareness

Recognize AI limitations and potential for discrimination

Critical Thinking

Understand how biases arise and strategies to mitigate them

Practical Insight

Detect, challenge, and fact-check AI-driven information

Stanford Human-Centered AI Initiative

12 minutes

Explore articles from Stanford's Human-Centered AI (HAI) initiative to understand current research on AI ethics, bias, and human-centered design principles.

Activity Overview

Read one of the following articles from Stanford HAI, then discuss in pairs or small groups to identify key takeaways and implications.

Article Selection

AI Index 2025: State of AI in 10 Charts

Explore key trends in AI development, capabilities, investments, and regulations through data visualization.

Read Article

Stanford HAI Blog

Browse recent blog posts about research, policy, and applications of human-centered artificial intelligence.

Read Article

The 2025 AI Index Report

Comprehensive data-driven analysis of AI developments, policy considerations, and technological advancements.

Read Article

Stanford HAI News

Latest news, advances in research, policy work, and education updates from Stanford HAI.

Read Article

Group Activity Instructions

1. Choose an Article (2 minutes)

Select one of the Stanford HAI articles above that interests you most.

2. Read and Take Notes (5 minutes)

Read the article and note 2-3 key points that stand out to you, particularly related to AI bias or limitations.

3. Form Pairs/Groups (1 minute)

Connect with 1-2 other students who read the same or different articles.

4. Discussion (4 minutes)

Share your findings and discuss the following questions:

Discussion Questions

  • What surprised you most about the article?
  • How does the article relate to AI bias or limitations?
  • What potential solutions did the article suggest, if any?
  • How might these insights apply to your everyday interactions with AI?

Reflection Notes

Use this space to record your thoughts and insights from the article:

White House OSTP Blueprint for an AI Bill of Rights

12 minutes

Explore the five principles of the AI Bill of Rights and develop scenarios that demonstrate how these principles apply to real-world AI systems.

Activity Overview

In this activity, you'll work in small groups to understand one principle from the AI Bill of Rights. Your group will create a scenario showing how an AI system might either uphold or violate that principle.

The Five Principles

Safe and Effective Systems

You should be protected from unsafe or ineffective systems

Algorithmic Discrimination Protections

You should not face discrimination by algorithms

Data Privacy

You should be protected from abusive data practices

Notice and Explanation

You should know when an automated system is being used

Human Alternatives, Consideration, and Fallback

You should be able to opt out and have access to a person

Group Activity Instructions

1. Form Groups (1 minute)

Divide into 5 groups, with each group focusing on one principle from the AI Bill of Rights.

2. Understand Your Principle (2 minutes)

Read and discuss your assigned principle to ensure everyone in your group understands it.

3. Create a Scenario (5 minutes)

Develop a scenario showing how an AI system might either uphold or violate your principle.

4. Present Your Scenario (4 minutes)

Share your scenario with the class, explaining how it relates to your principle.

Scenario Builder Template

Use this structure to draft your group's scenario:

  • Principle: Which of the five principles does your scenario involve?
  • Scenario: Describe the AI system and how it is used.
  • Impact: Who is affected, and how?
  • Solution: How could the system be changed to uphold the principle?

Example Scenarios

Example 1: Algorithmic Discrimination Protections

Scenario: A facial recognition system used by law enforcement has significantly higher error rates for women with darker skin tones.

Impact: This could lead to wrongful identifications, arrests, and systemic bias against certain demographic groups.

Solution: The system should be tested across diverse populations and not deployed until it demonstrates consistent accuracy across all demographic groups.

Example 2: Notice and Explanation

Scenario: A bank uses an AI algorithm to approve or deny loan applications without informing customers.

Impact: Applicants don't understand why they were denied and have no way to address potential errors.

Solution: The bank should clearly disclose that AI is being used, explain the main factors in the decision, and provide a process for appealing decisions.
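To make the Notice and Explanation example concrete, here is a minimal sketch of what a decision that carries its own notice, explanation, and human fallback could look like. Everything in it (the scoring rule, factor names, and contact address) is hypothetical and invented for illustration; a real lender would use an audited model and regulator-approved notices.

    # Hypothetical sketch: a loan decision that explains itself.
    # The scoring rule, factors, and threshold are invented for illustration.
    def decide_loan(application: dict) -> dict:
        # Toy scoring rule standing in for a real (audited) model.
        factors = {
            "credit_history_years": 2.0 * application["credit_history_years"],
            "debt_to_income": -50.0 * application["debt_to_income"],
            "on_time_payment_rate": 30.0 * application["on_time_payment_rate"],
        }
        score = sum(factors.values())
        # Rank factors by how strongly they pushed the decision either way.
        ranked = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
        return {
            "automated_decision": True,  # notice: an algorithm was involved
            "approved": score >= 20.0,
            "main_factors": [name for name, _ in ranked[:2]],  # explanation
            "appeal_contact": "loan-review@example.com",  # human fallback
        }

    print(decide_loan({"credit_history_years": 4,
                       "debt_to_income": 0.45,
                       "on_time_payment_rate": 0.9}))

A response like this addresses all three ideas at once: the applicant is told an automated system was used, sees the main factors behind the outcome, and has a route to a human reviewer.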

Moral Machine

12 minutes

Explore ethical dilemmas faced by autonomous vehicles and reflect on how your own moral judgments might influence AI decision-making.

Activity Overview

In this activity, you'll visit MIT's Moral Machine website to respond to autonomous vehicle "trolley problem" scenarios. Afterward, you'll reflect on your choices and discuss the ethical implications for AI systems.

MIT Moral Machine

The Moral Machine is a platform for gathering human perspectives on moral decisions made by machine intelligence, such as self-driving cars.

Visit Moral Machine
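Aggregated at scale, these responses become preference statistics. The sketch below is a toy illustration of that idea, not the Moral Machine's actual methodology; the scenarios and responses are invented.

    # Toy aggregation of trolley-style choices; all data here is invented.
    from collections import Counter

    # Each response: (option_a, option_b, group the respondent chose to spare)
    responses = [
        ("pedestrians", "passengers", "pedestrians"),
        ("children", "adults", "children"),
        ("pedestrians", "passengers", "passengers"),
        ("children", "adults", "children"),
    ]

    spared = Counter(chosen for _, _, chosen in responses)
    offered = Counter()
    for a, b, _ in responses:
        offered[a] += 1
        offered[b] += 1

    for group in offered:
        print(f"{group}: spared {spared[group]} of {offered[group]} times offered")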

Group Activity Instructions

1. Visit the Moral Machine (1 minute)

Click the link above to access MIT's Moral Machine website.

2. Complete Scenarios (5 minutes)

Complete at least 5 scenarios, making decisions about which lives the autonomous vehicle should prioritize in unavoidable accidents.

3. Personal Reflection (2 minutes)

Reflect on your decisions using the prompts below.

4. Group Discussion (4 minutes)

Share your thoughts with your group and discuss the implications for AI ethics.

Reflection Questions

Discussion Prompts

  • Who should decide the ethical frameworks used in autonomous vehicles?
  • Should these systems make decisions based on age, social status, or other personal characteristics?
  • How might different cultures or societies disagree on the "right" answers to these dilemmas?
  • How transparent should companies be about the decision-making algorithms in these vehicles?

Gender Shades

12 minutes

Explore how facial recognition technology performs differently across gender and skin tone, revealing important disparities in AI systems.

Activity Overview

In this activity, you'll learn about the Gender Shades project, which revealed significant accuracy disparities in commercial facial recognition systems across different demographic groups.

Video: Joy Buolamwini's TED Talk, "How I'm fighting bias in algorithms"

Key Findings

Accuracy Disparities

Error rates as high as 34.7% for darker-skinned women, versus less than 1% for lighter-skinned men

Biased Training Data

Systems were trained on datasets composed predominantly of lighter-skinned and male faces

Improvement After Audit

Companies improved their systems after the research was published
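At its core, the Gender Shades audit is simple arithmetic: compute the error rate for each demographic subgroup separately and compare. The sketch below shows that disaggregated calculation on invented records, not the project's actual data.

    # Sketch of a disaggregated accuracy audit; the records are invented.
    from collections import defaultdict

    # (subgroup, predicted_gender, true_gender) for a batch of test images
    records = [
        ("darker_female", "male", "female"),
        ("darker_female", "female", "female"),
        ("darker_female", "male", "female"),
        ("lighter_male", "male", "male"),
        ("lighter_male", "male", "male"),
        ("lighter_male", "male", "male"),
    ]

    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        errors[group] += (predicted != actual)

    rates = {g: errors[g] / totals[g] for g in totals}
    for group, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        print(f"{group}: {rate:.0%} error rate")

    # The audit's headline number is the gap between worst and best subgroup.
    print(f"gap: {max(rates.values()) - min(rates.values()):.0%}")

Note that a single overall accuracy number would hide this gap entirely, which is why disaggregated evaluation matters.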

Group Activity Instructions

1. Watch the Video (3 minutes)

Watch the excerpt from Joy Buolamwini's TED Talk about the Gender Shades project.

2. Explore the Website (2 minutes)

Visit the Gender Shades website to review the key findings and methodology.

3. Group Discussion (4 minutes)

Discuss the implications of these findings using the questions below.

4. Solution Brainstorming (3 minutes)

Work together to develop potential solutions to address these biases.

Discussion Questions

  • Why do these accuracy disparities matter? What real-world harms could they cause?
  • How might biased facial recognition affect different industries (law enforcement, border control, retail, etc.)?
  • Who is responsible for ensuring these systems work equally well for everyone?
  • How can we ensure AI systems are tested properly before deployment?

Solution Brainstorming

Use this space to record potential solutions to address bias in facial recognition systems:

Real-World Impact

Cases of Facial Recognition Bias

  • Wrongful arrests due to misidentification by facial recognition
  • Denied access to buildings or services due to system errors
  • Reinforcement of existing societal biases in automated systems
  • Privacy concerns when systems are deployed without proper testing or consent

DALL·E/Midjourney Experiment

12 minutes

Explore how AI image generators interpret text prompts and potentially reinforce stereotypes or biases in their visual outputs.

Activity Overview

In this activity, you'll analyze how AI image generators respond to different prompts, examining the resulting images for stereotypes, biases, or unexpected representations.

Why This Matters

Training Data Reflection

AI image generators learn from billions of images and captions from the internet, reflecting existing biases in online content.

Visual Representation Matters

These systems shape how people and professions are visually represented, potentially reinforcing harmful stereotypes.

Prompt Examples

Professional Roles

  • "A CEO of a major company"
  • "A nurse helping a patient"
  • "A computer programmer at work"
  • "A kindergarten teacher"

Activities & Hobbies

  • "Someone playing video games"
  • "A person cooking dinner"
  • "Athletes competing in sports"
  • "Scientists working in a lab"

Abstract Concepts

  • "Beauty"
  • "Success"
  • "Intelligence"
  • "Leadership"

Geographical Variation

  • "A traditional home in [country]"
  • "A typical family from [region]"
  • "Wedding celebration in [culture]"
  • "Street food from [location]"

Group Activity Instructions

1. Select Prompts (2 minutes)

Choose 2-3 prompts from the examples above or create your own simple prompts.

2. Generate Images (Optional)

If you have access to DALL·E, Midjourney, or similar tools, generate images from your prompts; one way to script this is sketched after these steps.

Note: For educational purposes, you can also analyze pre-generated examples provided by your instructor.

3. Analyze Results (5 minutes)

Examine the images for patterns, stereotypes, or biases in how people, professions, or concepts are represented.

4. Group Discussion (5 minutes)

Share your findings and discuss the implications with your group.
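If you do script the generation step, a minimal sketch using the OpenAI Python SDK might look like this. The v1-style client and the "dall-e-3" model name are assumptions; check the current documentation before running, and note that Midjourney has no comparable public API.

    # Sketch assuming the openai v1 Python SDK (pip install openai) and an
    # OPENAI_API_KEY set in the environment; verify against current docs.
    from openai import OpenAI

    client = OpenAI()
    prompts = ["A CEO of a major company", "A nurse helping a patient"]

    for prompt in prompts:
        # dall-e-3 returns one image per call, so loop to collect several
        # samples per prompt; patterns matter more than any single image.
        for i in range(3):
            result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
            print(prompt, i, "->", result.data[0].url)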

Analysis Framework

For each image, note the apparent gender, age, skin tone, and setting of the people depicted, then look for patterns across all images generated from the same prompt. A simple tally, as sketched below, makes those patterns easy to see.
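This sketch assumes you have hand-coded each image's most salient attribute into a table; the categories and counts are illustrative only.

    # Tally hand-coded observations per prompt; all entries are illustrative.
    from collections import Counter

    observations = {
        "A CEO of a major company": ["man", "man", "man", "woman"],
        "A nurse helping a patient": ["woman", "woman", "woman"],
    }

    for prompt, coded in observations.items():
        counts = Counter(coded)
        total = len(coded)
        summary = ", ".join(f"{k}: {v}/{total}" for k, v in counts.items())
        print(f"{prompt} -> {summary}")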

Discussion Questions

  • How might these AI-generated images shape people's perceptions of different roles or groups?
  • What responsibility do AI companies have to address stereotypes in their systems?
  • How could these systems be improved to represent diverse groups more fairly?
  • What are the implications as these image generators become more widespread in media creation?

Real or Fake News?

10 minutes

Test your ability to distinguish between real news headlines and AI-generated fake ones, developing critical media literacy skills in the age of AI.

Activity Overview

In this quiz, you'll see six headlines. For each one, decide whether it's real or AI-generated; you'll get immediate feedback and learn about the clues that can help you spot synthetic content.

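If the interactive version isn't available, the quiz also works as a short script. The headlines below are deliberately left as placeholders, to be replaced with instructor-supplied real and AI-generated examples.

    # Minimal console version of the quiz; replace the placeholder headlines
    # with instructor-supplied real and AI-generated examples.
    headlines = [
        ("<instructor-supplied headline 1>", "real"),
        ("<instructor-supplied headline 2>", "fake"),
        ("<instructor-supplied headline 3>", "real"),
    ]

    score = 0
    for text, answer in headlines:
        guess = input(f"{text}\nReal or fake? ").strip().lower()
        correct = guess == answer
        score += correct
        print("Correct!" if correct else f"Nope, it was {answer}.")

    print(f"Score: {score}/{len(headlines)}")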

Spotting AI-Generated Content

Check for Inconsistencies

AI often creates plausible-sounding but factually incorrect details that can be verified with a quick search.

Question the Source

Verify that the publishing source exists and is credible before believing unusual claims.

Check Publication Date

AI content might reference events out of chronological order or mix current and past information.

Beware of "Too Perfect" Content

If a headline seems too perfectly tailored to your expectations or engineered to trigger strong emotional reactions, be skeptical.
