
Incident 335: UK Visa Streamline Algorithm Allegedly Discriminated Based on Nationality

Description: The UK Home Office's algorithm for assessing visa application risk explicitly considered nationality, allegedly causing applicants of certain nationalities to face heightened scrutiny and discrimination.
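
Public reporting on this incident describes a "streaming tool" that assigned each application a Red, Amber, or Green risk rating, with nationality as an explicit input. The following is a minimal, hypothetical sketch of that kind of rating logic; the nationality list, weights, and thresholds are invented here, not taken from the Home Office system, whose internals were never fully disclosed.

    # Hypothetical sketch of a nationality-aware "streaming" rating.
    # The Red/Amber/Green scale matches public reporting on this
    # incident; everything else below is an invented illustration.
    SUSPECT_NATIONALITIES = {"Nationality X", "Nationality Y"}  # placeholder "secret list"

    def stream_application(nationality: str, other_risk: float) -> str:
        """Return a traffic-light rating; Red attracts the most scrutiny."""
        score = other_risk  # 0.0-1.0 from other (undisclosed) factors
        if nationality in SUSPECT_NATIONALITIES:
            score += 0.5  # explicit nationality penalty: the contested feature
        if score >= 0.7:
            return "Red"
        if score >= 0.4:
            return "Amber"
        return "Green"

Because nationality alone can push an application across a threshold, two otherwise identical applicants can receive different ratings, which is the core of the discrimination allegation.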

Entities

Alleged: UK Visas and Immigration and UK Home Office developed an AI system deployed by UK Visas and Immigration, which harmed UK visa applicants from some countries.

Incident Stats

Incident ID: 335
Report Count: 8
Incident Date: 2015-03-01
Editors: Khoa Lam
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Intentional

Incident Reports

Legal action to challenge Home Office use of secret algorithm to assess visa applications
foxglove.org.uk · 2017

It has come to light that the Home Office is using a secretive algorithm, which it describes as a digital “streaming tool,” to sift visa applications. So far they have refused to disclose much information about how the algorithm works, hiding…

AI system for granting UK visas is biased, rights groups claim
theguardian.com · 2019

Immigrant rights campaigners have begun a ground-breaking legal case to establish how a Home Office algorithm that filters UK visa applications actually works.

The challenge is the first court bid to expose how an artificial intelligence pr…

The use of Artificial Intelligence by the Home Office to stream visa applications
kingsleynapley.co.uk · 2019

The growth of technology has brought a great deal of efficiency and security to almost all organisations and businesses. But such progress may have taken a slightly wrong turn as the reliance on artificial intelligence by the Home Office as…

Update: papers filed for judicial review of the Home Office’s visa algorithm
foxglove.org.uk · 2020

Foxglove is supporting the Joint Council for the Welfare of Immigrants (JCWI) to challenge the Home Office’s use of a secret algorithm to sift visa applications, which it describes as a digital “streaming tool”.

We share JCWI’s concerns tha…

UK commits to redesign visa streaming algorithm after challenge to 'racist' tool
techcrunch.com · 2020

The U.K. government is suspending the use of an algorithm used to stream visa applications after concerns were raised the technology bakes in unconscious bias and racism.

The tool had been the target of a legal challenge. The Joint Council …

Home Office says it will abandon its racist visa algorithm - after we sued them
foxglove.org.uk · 2020

Home Office lawyers wrote to us yesterday, to respond to the legal challenge which we’ve been working on with the Joint Council for the Welfare of Immigrants (JCWI). 

We were asking the Court to declare the streaming algorithm unlawful, and…

Home Office drops 'racist' algorithm from visa decisions
bbc.com · 2020

The Home Office has agreed to stop using a computer algorithm to help decide visa applications after allegations that it contained "entrenched racism".

The Joint Council for the Welfare of Immigrants (JCWI) and digital rights group Foxglove…

We won! Home Office to stop using racist visa algorithm
jcwi.org.uk · 2020

We are delighted to announce that the Home Office has agreed to scrap its 'visa streaming' algorithm, in response to legal action we launched with tech-justice group Foxglove.

From Friday, 7 August, Home Secretary Priti Patel will suspend t…
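
Several of the reports above describe why critics called the tool's design self-reinforcing: nationalities with high historical refusal rates were rated higher risk, attracting more scrutiny and therefore more refusals. Below is a minimal sketch of that feedback-loop dynamic; all nationalities, rates, and thresholds are invented for illustration.

    # Hypothetical illustration of the feedback-loop critique raised in
    # the reporting above. All figures are invented; the real tool's
    # inputs and thresholds were never fully disclosed.
    refusal_history = {"Nationality A": 0.30, "Nationality B": 0.10}

    def risk_rating(nationality: str) -> str:
        rate = refusal_history[nationality]
        return "Red" if rate > 0.25 else "Amber" if rate > 0.15 else "Green"

    def simulate_round(extra_refusals_if_red: float = 0.05) -> None:
        # Red-rated applicants face more scrutiny, hence more refusals,
        # which raises the refusal rate used in the next round's rating.
        for nat, rate in refusal_history.items():
            if risk_rating(nat) == "Red":
                refusal_history[nat] = min(1.0, rate + extra_refusals_if_red)

    for _ in range(3):
        simulate_round()
        print({nat: risk_rating(nat) for nat in refusal_history})

In this toy model, "Nationality A" never escapes the Red rating because each round's extra refusals justify the next round's rating, while "Nationality B" stays Green regardless of individual applicants' merits.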

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Opaque Fraud Detection Algorithm by the UK’s Department of Work and Pensions Allegedly Discriminated against People with Disabilities
Oct 2019 · 6 reports

Facial Recognition Trial Performed Poorly at Notting Hill Carnival
Aug 2017 · 4 reports

Tinder's Personalized Pricing Algorithm Found to Offer Higher Prices for Older Users
Mar 2015 · 4 reports
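
This page does not explain how "textual similarity" is computed. Below is a minimal sketch of one standard approach, TF-IDF vectors compared by cosine similarity; this is an illustrative assumption, not necessarily the AIID's actual method, and the incident IDs other than 335 are placeholders.

    # Sketch of "similar incidents by textual similarity" using TF-IDF
    # and cosine similarity. Illustrative only: a common technique, not
    # necessarily the method the AIID actually uses.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Titles from this page; IDs other than 335 are placeholders.
    corpus = {
        335: "UK visa streaming algorithm allegedly discriminated based on nationality",
        1: "Opaque fraud detection algorithm by the UK Department of Work and "
           "Pensions allegedly discriminated against people with disabilities",
        2: "Facial recognition trial performed poorly at Notting Hill Carnival",
        3: "Tinder personalized pricing algorithm found to offer higher prices "
           "for older users",
    }

    def most_similar(query_id: int, k: int = 3) -> list[tuple[int, float]]:
        ids = list(corpus)
        vectors = TfidfVectorizer(stop_words="english").fit_transform(corpus.values())
        sims = cosine_similarity(vectors[ids.index(query_id)], vectors).ravel()
        ranked = sorted(zip(ids, sims), key=lambda pair: -pair[1])
        return [(i, round(s, 3)) for i, s in ranked if i != query_id][:k]

    print(most_similar(335))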

