Incident 830: Error-Prone AI Accessibility Tools Reportedly Lead to Navigation Issues for Blind Internet Users

Description: AI-powered accessibility overlays on websites frequently mislabel or misinterpret content, complicating navigation for blind users and others with disabilities. Users report that the AI tools interfere with screen readers and mislead them with inaccurate descriptions. The reported unreliability of these tools has prompted legal action, even as the companies deploying them seek compliance with accessibility laws.
Editor Notes: This incident ID is a collective incident ID. Reconstructing the timeline of events: (1) From 2019 to 2023, the use of AI-powered accessibility tools increased significantly. (2) In 2023, over 4,500 lawsuits were filed in the U.S. against companies over weak compliance with accessibility regulations. (3) The Financial Times published its investigation on April 7, 2024.
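
The core failure mode described above can be pictured with a minimal sketch. The snippet below is purely illustrative and not code from any named vendor: a hypothetical overlay script guesses accessible names from weak signals (here, image filenames) and writes them into the DOM, which is how a screen reader can end up announcing inaccurate descriptions. Both function names are assumptions made for this example.

```typescript
// Hypothetical sketch of the failure mode: an overlay script guesses
// accessible names from filenames and overwrites whatever labels the
// page author provided, so screen readers announce the guess instead.

// Naive label guess from a filename, e.g. "btn_close.png" -> "btn close".
function guessLabelFromFilename(src: string): string {
  const file = src.split("/").pop() ?? "";
  return file.replace(/\.[a-z]+$/i, "").replace(/[_-]+/g, " ").trim();
}

function applyOverlayLabels(root: Document): void {
  for (const img of Array.from(root.querySelectorAll<HTMLImageElement>("img"))) {
    // Overwriting labels unconditionally is the problem: a correct,
    // human-written description is replaced by a machine guess, and
    // assistive technology presents that guess as authoritative.
    img.setAttribute("aria-label", guessLabelFromFilename(img.src));
  }
}

applyOverlayLabels(document);
```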

Entities

View all entities
Alleged: EqualWeb, UserWay, and Developers of AI-based accessibility tools developed an AI system deployed by Zara, Pemex, LVMH, Capita, and Companies using AI-based accessibility tools, which harmed Blind people, Visually impaired people, and Jakob Rosin.

Incident Stats

Incident ID: 830
Report Count: 1
Incident Date: 2024-04-07
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI

7.3. Lack of capability or robustness

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

7. AI system safety, failures & limitations

Entity

Which, if any, entity is presented as the main cause of the risk

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal

Unintentional

Incident Reports

Reports Timeline

Blind Internet Users Struggle With Error-Prone AI Aids
ft.com · 2024

Unreliable software installed to comply with rules to help disabled people navigate online has prompted thousands of lawsuits.

Jakob Rosin, a prominent member of Estonia’s blind community, recalled browsing a sports club website with the he…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

Employee Automatically Terminated by Computer Program
Oct 2014 · 20 reports

Defamation via AutoComplete
Apr 2011 · 28 reports

Airbnb's Trustworthiness Algorithm Allegedly Banned Users without Explanation, and Discriminated against Sex Workers
Jul 2017 · 6 reports
