Predictive policing uses computer systems to analyze large sets of data in order to guide law enforcement decisions about where to deploy police, or to identify individuals who may be more likely to commit a crime or to become the victim of one.
There are two main applications of predictive policing: using arrest data to identify areas with a high incidence of crime, and estimating the likelihood that specific individuals will commit a crime. The latter is more controversial because it relies on the acquisition of social media data to make predictions about particular individuals without a firm evidentiary basis. The use of publicly available personal information opens the door to more intrusive forms of predictive policing: the widespread availability of facial images enables more invasive uses of CCTV cameras, and data on online behavior can feed individual profiling and risk assessments.
Because of these concerns, predictive policing has drawn negative comparisons to the dystopian science-fiction film Minority Report (2002), in which a similar system is misused by the authorities to charge the protagonist with a crime he has not yet committed. PredPol (since renamed Geolitica) disputed the association of its predictive policing product with the film, arguing that, unlike the system portrayed in Minority Report, its software does not focus on individuals.
Ezekiel Edwards, director of the Criminal Law Reform Project at the American Civil Liberties Union, made the following statement regarding the problems that predictive policing may present:
Broadly speaking, the quality of the data that police are using to make predictions raises significant concerns, [as does] the lack of transparency that the predictive policing programs seem to embrace, such that we don’t really know how they’re being used and how they’re constructed.
On August 31, 2016, a coalition of seventeen organizations, including the NAACP (National Association for the Advancement of Colored People), the American Civil Liberties Union, and the Brennan Center for Justice, issued a statement critiquing predictive policing tools used by law enforcement in the United States, highlighting the technology's capacity for racial bias, its lack of transparency, and other flaws that may lead to misapplication of the law. More specifically, the document raised the following concerns:
- The lack of transparency regarding predictive policing systems prevents a meaningful, well-informed public debate.
- Predictive policing systems ignore community needs.
- Predictive policing systems threaten to undermine the constitutional rights of individuals.
- Predictive technologies are primarily being used to intensify enforcement rather than to address human needs.
- Police departments could use predictive tools to anticipate which officers may engage in misconduct, but most have not done so.
- Predictive policing systems have failed to monitor their racial impact.
In response, companies involved in the predictive policing market began to address the problem of bias in the practice. For instance, CivicScape developed a system intended to detect biased data and either exclude it or adjust the resulting predictions to compensate. The company also plans to run machine learning checks for biased data on a regular basis, and it publishes the code for these algorithms on GitHub, where anyone can review it freely.
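The general idea behind such a bias audit can be illustrated with a short sketch. The example below is purely hypothetical and does not reflect CivicScape's published algorithms: it compares each neighborhood's arrest rate in toy training data against the city-wide rate and suggests down-weighting areas that are heavily over-represented. The neighborhood names, figures, and the 1.5x threshold are all assumptions made for illustration.

```python
# Illustrative sketch of a pre-training bias check on place-based arrest data.
# All data and the reweighting rule are hypothetical, not a vendor's method.

# Hypothetical arrest counts and resident populations per neighborhood.
arrests = {"north": 120, "south": 340, "east": 95, "west": 110}
population = {"north": 18000, "south": 21000, "east": 17000, "west": 19500}

# City-wide baseline arrest rate.
total_rate = sum(arrests.values()) / sum(population.values())

# Flag neighborhoods whose arrest rate is far above the baseline; such
# over-representation in historical data is one signal of possibly biased
# (over-policed) inputs that an audit would surface for review or reweighting.
THRESHOLD = 1.5  # assumed cutoff: 1.5x the city-wide rate

for area, count in arrests.items():
    rate = count / population[area]
    if rate > THRESHOLD * total_rate:
        weight = total_rate / rate  # down-weight the over-represented area
        print(f"{area}: rate {rate:.4f} exceeds baseline; suggested weight {weight:.2f}")
```

A real system would go further, for example by testing model outputs for disparate impact over time, but the sketch shows the basic shape of checking input data for skew before it influences predictions.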