Michigan local government leaders report significant increases in police surveillance technology, uncertainty about AI introduction
This report presents the assessments of Michigan’s local government leaders, local chiefs of police, and county sheriffs on the use and value of a range of law enforcement equipment and technology such as body and dashboard cameras, drones, automated license plate readers, and facial recognition. In addition, these local leaders, plus county prosecutors, were asked about their views on the use of automated tools such as AI and machine learning in criminal justice work. These findings are based on statewide surveys of local government leaders in the Spring 2024 wave of the Michigan Public Policy Survey (MPPS), with some comparisons to data collected in the Fall 2015 MPPS wave.
Key findings
- Among Michigan communities that fund their own police departments or sheriffs’ offices, local officials report significant increases in the use of cameras and surveillance technology between 2015 and 2024.
- When it comes to newer technologies, 26% of sheriffs and police chiefs statewide indicate their agency currently uses automated license plate readers, 10% report use of facial recognition technology, and 3% report use of AI or other predictive tools for policing.
- Among jurisdictions that currently use these policing technologies, almost all local leaders agree that each is a worthwhile investment for their communities. However, local government officials are generally less likely than law enforcement leaders to “strongly agree.”
- When it comes to confidence in predictive policing tools such as AI and machine learning, over half (55%) of local government officials say they are unsure if assessments made by automated tools are more or less accurate than those made by humans. Uncertainty is even higher among sheriffs and police chiefs (59%) and county prosecutors (66%).
- When elected county prosecutors were asked about specific uses of AI tools in their offices’ work, 50% said they at least somewhat trust AI applications designed to identify high-risk neighborhoods, while 45% at least somewhat trust AI tools for processing and analyzing forensic evidence. Just 20% report any trust in AI’s capacity to conduct risk assessments for sentencing or offers of parole, probation, and release.