Description: In December 2020, an Italian court ruled that Deliveroo’s employee ‘reliability’ algorithm illegally discriminated against workers with legitimate reasons for cancelling shifts.
Entities
Alleged: Deliveroo developed and deployed an AI system, which harmed Deliveroo workers with legitimate reasons for cancelling shifts, and Deliveroo workers more broadly.
CSETv0 Taxonomy Classifications
Problem Nature
Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e., the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e., the system operated unsafely because of features or changes in its environment or in the inputs it received; "Assurance," i.e., the system could not be adequately monitored or controlled during operation.
Specification
Physical System
Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.
Software only
Level of Autonomy
The degree to which the AI system(s) functions independently of human intervention. "High" means no human is involved in the execution of the system's action; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes the decision and executes the action.
High
Nature of End User
"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
Amateur
Public Sector Deployment
"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company), on the other.
No
Data Inputs
A brief description of the data that the AI system(s) used or were trained on.
employee activity history, shift schedules
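For readers who consume these classifications programmatically, the CSETv0 fields above map naturally onto a small typed record. The sketch below is a hypothetical representation (the class and enum names are our own illustration, not an official AIID or CSET schema), populated with this incident's values as transcribed from the fields above.

```python
from dataclasses import dataclass
from enum import Enum


class ProblemNature(Enum):
    SPECIFICATION = "Specification"  # behavior misaligned with designer/operator intent
    ROBUSTNESS = "Robustness"        # unsafe operation due to environment or input changes
    ASSURANCE = "Assurance"          # system could not be adequately monitored or controlled


class AutonomyLevel(Enum):
    HIGH = "High"      # no human involved in executing the system's action
    MEDIUM = "Medium"  # system decides; a human oversees the resulting action
    LOW = "Low"        # system supports a human who decides and acts


@dataclass
class CSETv0Classification:
    """Hypothetical container for the CSETv0 fields shown above."""
    incident_number: int
    problem_nature: ProblemNature
    physical_system: str
    level_of_autonomy: AutonomyLevel
    end_user: str            # "Expert" or "Amateur"
    public_sector: bool
    data_inputs: list[str]


# This incident's classification, as recorded in the taxonomy fields above.
incident_94 = CSETv0Classification(
    incident_number=94,
    problem_nature=ProblemNature.SPECIFICATION,
    physical_system="Software only",
    level_of_autonomy=AutonomyLevel.HIGH,
    end_user="Amateur",
    public_sector=False,
    data_inputs=["employee activity history", "shift schedules"],
)
```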
CSETv1 Taxonomy Classifications
Incident Number
The number of the incident in the AI Incident Database.
94
Incident Reports
Reports Timeline
techcrunch.com · 2021
A court in Italy has dealt a blow to unalloyed algorithmic management after a legal challenge brought by three unions. The Bologna court ruled that a reputational-ranking algorithm used by on-demand food delivery platform Deliveroo discrimi…
vice.com · 2021
An algorithm used by the popular European food delivery app Deliveroo to rank and offer shifts to riders is discriminatory, an Italian court ruled late last week, in what some experts are calling a historic decision for the gig economy. The…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.