AI Incident Database

Incident 261: Robot Deployed by Animal Shelter to Patrol Sidewalks outside Its Office, Warding off Homeless People in San Francisco

Description: The San Francisco Society for the Prevention of Cruelty to Animals (SPCA) deployed a Knightscope robot to autonomously patrol the sidewalk outside its office and ward off homeless people. Residents criticized the robot as a tool of intimidation, and the city of San Francisco ordered the SPCA to stop operating it on a public right-of-way.


Entities

Alleged: Knightscope developed an AI system deployed by the Society for the Prevention of Cruelty to Animals, which harmed San Francisco homeless people.

Incident Stats

Incident ID: 261
Report Count: 8
Incident Date: 2017-11-15
Editors: Khoa Lam
Applied Taxonomies: GMF, MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.
1.1. Unfair discrimination and misrepresentation

Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
1. Discrimination and Toxicity

Entity
Which, if any, entity is presented as the main cause of the risk.
Human

Timing
The stage in the AI lifecycle at which the risk is presented as occurring.
Post-deployment

Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
Intentional

Incident Reports


Security robot that deterred homeless encampments in the Mission gets rebuke from the city
bizjournals.com · 2017

San Francisco residents continue to rage against the machines.

While the city's Board of Supervisors moves toward finalizing limits on robots that roam the sidewalks to deliver food and goods, it must also find a way to handle security robo…

Robots are being used to shoo away homeless people in San Francisco
qz.com · 2017

The San Francisco branch of the Society for the Prevention of Cruelty to Animals (SPCA) has been ordered by the city to stop using a robot to patrol the sidewalks outside its office, the San Francisco Business Times reported Dec. 8.

The rob…

Security robots are being used to ward off San Francisco’s homeless population
techcrunch.com · 2017

Is it worse if a robot instead of a human is used to deter the homeless from setting up camp outside places of business?

One such bot cop recently took over the outside of the San Francisco SPCA, an animal advocacy and pet adoption clinic i…

Animal shelter faces backlash after using robot to scare off homeless people
theverge.com · 2017

An animal shelter in San Francisco has been criticized for using a robot security guard to scare off homeless people.

The San Francisco branch of the SPCA (the Society for the Prevention of Cruelty to Animals) hired a K5 robot built by Knig…

Security robot bullied and forced off the street in San Francisco
dezeen.com · 2017

A robot patrolling a street in San Francisco to ward off homeless people has been removed after complaints from locals, who also knocked it over and smeared it with feces.

The Knightscope K5 security robot was deployed by the San Francisco …

Crime-fighting robot retired after launching alleged ‘war on the homeless’
washingtonpost.com · 2017

Like so many classic Western anti-heroes before him, he rolled (literally) into town with a singular goal in mind: cleaning up the streets, which had become a gritty hotbed of harassment, vandalism, break-ins and grift.

The only difference …

Big Brother on wheels? Fired security robot divides local homeless people
theguardian.com · 2017

To some homeless people, San Francisco’s latest security robot was a rolling friend on five wheels that they called “R2-D2 Two”. To others living in tents within the droid’s radius, it was the “anti-homeless robot”.

For a month, the 400lb, …

The Tricky Ethics of Knightscope's Crime-Fighting Robots
wired.com · 2017

In November, the San Francisco SPCA deployed a 5-foot-tall, 400-pound robot to patrol its campus. Not for muscle, mind you, but for surveillance. The SPCA, a large complex nestled in the northeast corner of the city's Mission neighborhood, …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

2023 - AI Incident Database
