AI Incident Database

Incident 844: SafeRent AI Screening Tool Allegedly Discriminated Against Housing Voucher Applicants

Description: SafeRent’s AI-powered tenant screening tool used credit history and debts unrelated to rental history to assign scores, disproportionately penalizing Black and Hispanic renters and applicants using housing vouchers. Plaintiffs alleged that the resulting discriminatory housing outcomes violated the Fair Housing Act and Massachusetts law. A class action lawsuit (Louis, et al. v. SafeRent Solutions, et al.) resulted in a $2.275 million settlement and changes to SafeRent’s practices.
Editor Notes: Reconstructed timeline of events:
(1) May 25, 2022: A class action lawsuit was filed against SafeRent Solutions in the U.S. District Court for the District of Massachusetts, alleging violations of the Fair Housing Act and state laws stemming from algorithmic discrimination against Black and Hispanic rental applicants using housing vouchers.
(2) January 9, 2023: The U.S. Department of Justice and the Department of Housing and Urban Development filed a statement of interest supporting the case.
(3) July 26, 2023: The court denied SafeRent’s motion to dismiss, ruling that the plaintiffs sufficiently alleged that SafeRent’s scoring system caused a disparate impact.
(4) November 20, 2024: The court approved a $2.275 million settlement with injunctive relief prohibiting discriminatory tenant scoring practices, setting a national precedent for fair tenant screening.


Entities

Alleged: SafeRent Solutions developed an AI system deployed by Landlords, which harmed Renters, Massachusetts renters, Hispanic renters, Black renters, Mary Louis, and Monica Douglas.

Incident Stats

Incident ID: 844
Report Count: 4
Incident Date: 2022-05-25
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports


Justice Department Files Statement of Interest in Fair Housing Act Case Alleging Unlawful Algorithm-Based Tenant Screening Practices
justice.gov · 2023

The Department of Justice and the Department of Housing and Urban Development (HUD) announced today that they filed a Statement of Interest to explain the Fair Housing Act's (FHA) application to algorithm-based tenant screening systems. The…

AI landlord screening tool will stop scoring low-income tenants after discrimination suit
theverge.com · 2024

SafeRent, an AI screening tool used by landlords, will no longer use AI-powered "scores" to evaluate whether someone using housing vouchers would make a good tenant. On Wednesday, US District Judge Angel Kelley issued final approval for a r…

Louis, et al. v. SafeRent Solutions, et al.
cohenmilstein.com · 2024

Overview

Mary Louis, Monica Douglas, and the Community Action Agency of Somerville, the Plaintiffs, allege that SafeRent Solutions, LLC, which provides tenant screening services to landlords and property owners, has been violating the Fair …

She didn’t get an apartment because of an AI-generated score – and sued to help others avoid the same fate
theguardian.com · 2024

Three hundred twenty-four. That was the score Mary Louis was given by an AI-powered tenant screening tool. The software, SafeRent, didn’t explain in its 11-page report how the score was calculated or how it weighed various factors. It didn’…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction
May 2016 · 22 reports

Northpointe Risk Models
May 2016 · 15 reports

HUD charges Facebook with enabling housing discrimination
Aug 2018 · 4 reports

