Friday, October 31, 2025

AI and crime prevention: Drawing the ethical line


Transmitted electrons, processed signals, algorithmic pattern recognition
- the quiet machinery of our new AI reality

by Gregory Saville

Over the past two years, I’ve written in this space about the risks and promises of artificial intelligence in community safety. In The Pros and Cons of Using AI to Prevent Crime and Stop, Dave, I’m Afraid: The Latest on AI and CPTED, I explored the tension between innovation and oversight. And in Gambling with the Future, I warned that without guardrails, predictive systems could amplify bias faster than any police algorithm before them.

This month, that conversation moves from theory to substance. I am about to release what may be the first field-ready ethical framework for artificial intelligence in CPTED/crime prevention for the International CPTED Association. 

This new AI and CPTED White Paper is the product of research, discussions, and interviews conducted as part of the ICA's Praxis/Theory CPTED Committee. I solicited feedback from CPTED and artificial intelligence specialists from around the world. The result sets out principles for transparency, accountability, and human-centered design in the age of intelligent machines.

The literature review covered the latest writing on AI, including
historian Yuval Noah Harari's exceptional book Nexus.

Why now? Because AI technology has already arrived: 

  • City cameras now run on neural networks that detect “anomalies” using predictive AI. 
  • Drone patrols and risk dashboards mine enormous datasets for facial recognition. 
  • Planners and urban designers are using generative AI to digitally simulate community planning scenarios, known as digital twins. 
  • Some futurists envision “smart cities” using a concept Mateja and I developed called 3rd Generation CPTED. 
  • The “smart city,” an urban environment driven by AI algorithms, already poses enormous challenges for crime and CPTED, a point I made at a 2021 Smart City conference presentation in Sweden. 

What has not arrived are the ethical guidelines to match that power.

GENERATIVE AND PREDICTIVE AI

During my research I spoke to Professor Emma Pierson, a brilliant AI ethics scholar at the University of California, Berkeley, who reminds us that public debate around AI often drifts into abstraction. She urges policymakers to start with two foundational forms—predictive and generative AI—because nearly every current application stems from one or both. 

Predictive models infer patterns from data; generative models create new content from learned representations. Everything else, including robotic, agentic, or hybrid forms of AI, builds on those foundations.

Drones are not AI, but there are many crime prevention and policing
applications where they lend themselves to AI.

That insight shapes this white paper. We focus first on how predictive systems are reshaping surveillance and resource allocation, and how generative tools could soon influence public messaging, architectural design, or even neighborhood storytelling. Each domain carries profound implications for privacy, accountability, and equity.

In crime prevention, ethical AI isn’t about the gadgets. It’s about governance. A predictive dashboard that flags “high-risk” behavior might be used to block or respond to people’s actions without community consent. That violates the very democratic principles of CPTED. A generative model that drafts neighborhood improvement plans without residents’ input is just as misguided. The new framework calls for three essential commitments:

  • Transparency: every AI-driven decision in urban safety must be explainable to the public it affects.
  • Oversight: humans remain accountable for outcomes; algorithms can advise but never decide.
  • Co-creation: residents are partners in design, not passive data points in someone else’s experiment.

This isn’t merely theoretical. The purpose of a white paper is to generate discussion within the ICA and elsewhere; it provides the factual background to launch those deliberations. ICA members from Europe, South America, Asia, Africa, Australasia, and North America will now have an AI framework for examining real-world cases where technology overstepped its reach.

 

The UN is now publishing ethical guidelines for AI usage.

The paper describes several case studies. In one, an intelligent lighting system quietly profiled behavior by race and age. In another, predictive policing software displaced trust in neighborhood problem-solving teams. These examples remind us that the ethics of AI are not a luxury. They are a public-safety necessity.

In a recent podcast with ICA President Macarena Rau Vargas, we discussed how ethical AI could strengthen community resilience.  

When designed within CPTED’s 1st Gen principles of territorial ownership, 2nd Gen principles of community cohesion, and 3rd Gen principles of sustainability and participation, we discover a version of AI that can illuminate, not dominate, public space.

The white paper concludes with a call to action. It challenges practitioners, researchers, and civic leaders to adopt a human-in-the-loop standard. AI can process information, but it cannot define meaning. That responsibility belongs to us. As Professor Pierson reminds us, the goal is not to slow innovation but to anchor it in accountability.

Next year, we will release our new SafeGrowth® book, co-authored by Mateja Mihinjac, Jason Tudor, Carl Bray, and myself. It offers detailed examples of success, candid lessons from failure, and a full chapter on a smart city initiative in Sweden that points toward the future.

After years of urging that crime prevention needs an ethical compass, we finally have both the foundation and the language to chart one. The next step belongs to everyone — planners, designers, police, community members, and policymakers — to draw the ethical line and keep it visible.



Leave a comment

Please add comments to SafeGrowth. I will post every comment except those with abusive, off-topic, or offensive language; discriminatory, racist, sexist, or homophobic slurs; thread spamming; or ad hominem attacks.

If your comment does not appear within a day due to Blogspot problems, send it to safegrowth.office@gmail.com and we'll post it directly.