Incident 844: SafeRent AI Screening Tool Allegedly Discriminated Against Housing Voucher Applicants
Description: SafeRent’s AI-powered tenant screening tool used credit history and non-rental-related debts to assign scores, disproportionately penalizing Black and Hispanic renters and those using housing vouchers. The reported discriminatory housing outcomes violated the Fair Housing Act and Massachusetts law. A class action lawsuit (Louis, et al. v. SafeRent Solutions, et al.) resulted in a $2.275 million settlement and changes to SafeRent’s practices.
Editor Notes: Reconstructing the timeline of events: (1) May 25, 2022: a class action lawsuit was filed against SafeRent Solutions in the U.S. District Court for the District of Massachusetts, alleging violations of the Fair Housing Act and state law arising from algorithmic discrimination against Black and Hispanic rental applicants who use housing vouchers. (2) January 9, 2023: the U.S. Department of Justice and the Department of Housing and Urban Development filed a statement of interest supporting the plaintiffs. (3) July 26, 2023: the court denied SafeRent’s motion to dismiss, ruling that the plaintiffs had sufficiently alleged that SafeRent’s scoring system caused a disparate impact. (4) November 20, 2024: the court approved a $2.275 million settlement with injunctive relief prohibiting discriminatory tenant scoring practices, setting a national precedent for fair tenant screening.
Alleged: SafeRent Solutions developed an AI system deployed by Landlords, which harmed Renters, Massachusetts renters, Hispanic renters, Black renters, Mary Louis, and Monica Douglas.
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
1.1. Unfair discrimination and misrepresentation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination & Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional