Showing posts with label Artificial intelligence. Show all posts

Wednesday, February 22, 2023

"Stop Dave... I'm afraid" - The latest on AI and crime prevention

AI-driven shape-recognition tech in the Smart City
- photo by Creative Commons courtesy of QueSera4710


by Gregory Saville

With apologies to the deactivation scene of the HAL 9000 computer in 2001: A Space Odyssey, the title above came to mind when I recently read a paper on AI. What is the latest in AI? Our work in SafeGrowth and CPTED often places us face-to-face with suggestions for artificial intelligence as an answer to crime.

AI appears in this blog in the form of my keynote address to the ICA annual conference in Sweden regarding AI, Smart Cities and CPTED. I have blogged on this topic a number of times, including AI in law enforcement and AI and CCTV.

I recently read a provocative new research study on AI and Smart Cities. In the paper Understanding citizen perceptions of AI in the smart city, Finnish computer science researcher Anu Lehtiö and her colleagues examine public perceptions of AI in relation to Smart City urban planning. The Smart City movement is gaining momentum and it is intricately tied to AI.


RESEARCH FINDINGS

Here are some responses they uncovered:

I don’t need to worry about AI. I trust those who are in control of it. I’ve already seen digitization in other aspects of life. It isn’t so bad, maybe even good!

Some people are happy just letting things happen and they rely on “those who know better” to handle problems. This is an understandable position, given the speed of change today and the overwhelming amount of information flooding our screens.  However, given the catastrophe of other unfettered technologies (nuclear power, for example), this is a poorly thought-out and risky strategy.


Mapping the connectivity of the worldwide internet
- photo courtesy of Creative Commons, By The Opte Project 
  

I’ll avoid AI if I think there is a danger. 

It is difficult to anticipate all the potential dangers of AI. I presented some in my keynote address (see link above). But because AI arrives with so many advantages for us, especially in the Smart City, we tend to look the other way and ignore unanticipated consequences. In truth, there are many ways AI might go awry, and sci-fi writers have offered some terrifying possibilities. Consider The Terminator, The Matrix, or the beautiful Ava in Ex Machina!

I don’t like “them” monitoring me. 

In this era of digitization, it is hard to imagine any modern society without extensive digital record-keeping. We are already thoroughly embedded into one database or another. There are motor vehicle databases, government databases, health services, pension, and social assistance databases, not to mention thousands of corporate databases that record our purchases, both online and in person.

The fact is that we are already monitored. Further, with AI there is no “them”. Artificial intelligence is literally anywhere there is electricity, a processor, and some AI programming. The public, it seems, is blissfully unaware of the risks posed by AI. 


IT WILL BE NORMAL IN 50 YEARS

The most concerning findings:

The interviewees saw no reason to fret over something that would become a natural part of citizens lives, in time. The younger generation would deem current worries as unintelligible, having grown up in a society where the use of AI was the norm (female, 25). On the other hand, the interviewees presumed that it would still take a long time (“another 50 years” female, 44, personal assistant) before AI was mature enough to operate on a level that was notable enough to bring about significant, concrete changes in people’s everyday lives. 

In other words, don’t fret about AI because it will become normal and it’ll take 50 years before it matters. Talk about clueless bliss.

This is known as the AI effect - the tendency to dismiss each new AI capability as "not real intelligence" once it becomes routine. If you have read anything on AI and its exponential growth you will agree that waiting around for AI to wake up is cavalier and dangerous. We need to be much more diligent.

Ultimately, it seems the researchers too are alarmed. They recommend a form of AI called human-centered AI, a programming approach based on human ethics and the importance of personal privacy. Not quite Asimov’s Three Laws of Robotics, but close enough for now.


Sunday, July 31, 2022

Defund the police? Absurd! Re-fund and reframe? Absolutely!


State Capitol Building, South Dakota. The Law Enforcement Training Academy is a national leader in police instructor certification in problem-based learning.
Photo Travel South Dakota

GUEST BLOG: Gerard Cleveland is a school and youth violence prevention expert and an attorney based in Australia. He is co-author of Swift Pursuit: A Career Survival Guide for the Federal Officer. He is a frequent contributor to this blog, most recently regarding policing and drones.


PROBLEM-SOLVERS, NOT CALL RESPONDERS

No one serious about public safety would advocate for the abolition of our police agencies. We need them in times of emergency, as well as to investigate and solve community crime and disorder problems. However, we do need to have a serious discussion about what we want our police agencies to focus on in the next few decades.

Greg Saville and I just finished teaching a two-week problem-solving class called Problem-Based Learning for Police Educators at the Law Enforcement Training Academy in South Dakota with a wonderful group of dedicated and talented police and public service participants. Much of the course focused on ‘what next’, and senior police and sheriff executives (graduates of our previous classes) visited to tell us that as our communities change, so too must our public service agencies.

During all our training courses, we challenge police and community leaders to answer some key questions they will face in the years ahead, two of which concern the metaverse and artificial intelligence.


The theoretical and futuristic cyberspace called the "metaverse" poses powerful challenges to policing


THE METAVERSE

If you are serving in a public role – in any agency – what plans and training have you undertaken to deal with issues in the metaverse? As that virtual area of our lives grows and becomes part of our daily activities, what role will police need to take?  If you are not sure that you need to address this issue yet, consider how much catching up policing agencies had to do with the arrival of crime on the web – especially the dark web – only a few decades ago. We do not want to be in the same position of catching up with technology as the metaverse extends its reach into our daily lives.

As well, what does your team know about the enhanced capabilities of privately owned drones? Many of our class members had never considered that crime may now arrive in their neighbourhoods via mini drones. Their experience with drones generally extended to using police drones to clear buildings or watch traffic patterns, but almost no planning had been done for drones used for nefarious purposes by criminals. Greg describes one high-crime hotspot where his team brought SafeGrowth programming, only to learn that the neighbourhood gang used drones to monitor police patrols.


ARTIFICIAL INTELLIGENCE 

Finally, how does your agency plan to address the development and growth of Artificial Intelligence (AI)? While AI will provide positive support in medicine, engineering, traffic control, predictive policing, and a multitude of other fields, how have you begun to prepare (as parts of Asia have) for AI attacks on our infrastructure, our computers, and even the vehicles we drive and the machines we operate?

If you find yourself scratching your head wondering, “what do I do next?” we have a suggestion. First, form some small groups with your police and community members and investigate and discuss what you can expect from the above developments in the next 10 years. Second, and most importantly, train your people to be problem solvers and thinkers, not reactive call responders.

But that last sentence is much harder than it sounds. We’ve been trying to change police training for the past two decades with limited success. I suspect that unless we reframe and fund strategies to address future trends, our current warrior-responder model will be largely irrelevant, except in limited circumstances, by the late 2020s and beyond.


Monday, October 25, 2021

AI vs CPTED at the 2021 ICA virtual conference


Facial recognition technology at a Chinese train station - photo Creative Commons

POST SCRIPT: SINCE THIS BLOG WAS POSTED, THIS KEYNOTE PRESENTATION WAS PUBLISHED BY THE ICA HERE.


by Gregory Saville

A man walks through a public plaza on a pleasant Sunday afternoon and passes by a CCTV camera. Minutes later he is arrested by police on suspicion of a crime that, in fact, he did not commit. The man is African American, and, unfortunately, the facial recognition software on the CCTV is vulnerable to false positives.

A predictive policing algorithm sends police patrols to the same neighborhood for the sixth week in a row to prevent crimes that have not yet occurred. Based on mathematics from earthquake prediction, this algorithm is hardly the best model for predicting human behavior and crime. It has no way to know that residents of this disenfranchised neighborhood are utterly fed up with over-policing, especially when the police don’t actually do anything except show up in their patrol cars.
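The earthquake link is literal: PredPol-style systems adapt self-exciting point-process models from seismology, in which every recorded crime temporarily raises the predicted rate of follow-on crime nearby, the way a quake raises the chance of aftershocks. A minimal sketch of that intensity calculation, with illustrative parameter values rather than any vendor's actual model:

```python
import math

def intensity(t, past_events, mu=0.5, alpha=0.3, beta=1.0):
    """Conditional intensity of a self-exciting (Hawkes) process:
    a background rate mu plus an exponentially decaying boost from
    every past event. Each recorded crime briefly inflates the
    predicted rate of 'aftershock' crimes that follow it in time."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in past_events if ti < t)

# A cluster of very recent reports keeps the forecast high...
recent = [9.0, 9.5, 9.8]
# ...while the same cluster far in the past barely registers.
old = [1.0, 1.5, 1.8]

print(intensity(10.0, recent))  # well above the background rate
print(intensity(10.0, old))     # close to the background rate mu
```

The residents' complaint falls out of the math: recent reports keep the predicted intensity high, so patrols keep returning to the same blocks, which generates more reports and sustains the feedback loop.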

I blogged on these stories earlier this year. 

The stories are real. Unfortunately, according to experts, predictive policing algorithms have serious problems with over-policing minority areas. The Los Angeles Police Department is the latest agency to abandon its PredPol program (it claims this is due to COVID). Similarly, scientists specializing in evaluation have criticized facial recognition software, claiming it cannot accurately read the facial characteristics of Black men!
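Part of the false-positive problem is simple base-rate arithmetic: when a watchlist system scans thousands of innocent faces, even a small per-scan error rate means most of the people it flags are not on the list at all, and a higher error rate for one demographic group multiplies the wrongful stops that group absorbs. A back-of-envelope sketch with entirely hypothetical numbers:

```python
def match_precision(n_scanned, prevalence, tpr, fpr):
    """Fraction of watchlist 'matches' that are real, when a camera
    scans n_scanned faces, a share `prevalence` of them are actually
    on the list, and the matcher has true/false positive rates tpr/fpr."""
    on_list = n_scanned * prevalence
    true_hits = on_list * tpr
    false_hits = (n_scanned - on_list) * fpr
    return true_hits / (true_hits + false_hits)

# Hypothetical busy plaza: 10,000 faces scanned, 1 in 10,000 actually
# wanted, a strong matcher (90% true positive, 0.1% false positive):
p = match_precision(10_000, 1e-4, tpr=0.9, fpr=0.001)
print(p)  # with these numbers, fewer than 1 in 10 flags is a real match
```

So even a matcher that sounds accurate flags mostly innocent people in this scenario, and if its false-positive rate is several times higher for one group, the wrongful flags concentrate on that group accordingly.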

These stories reflect the threat of introducing Artificial Intelligence into crime prevention. Thus far, at least with CPTED, things in the AI world are not going well.


The lure of AI in forensic detection and crime prediction


THE 2021 ICA CPTED CONFERENCE

On Nov 3, I will deliver a keynote address to the 2021 International CPTED Association virtual conference, hosted by Helsingborg, Sweden, the Safer Sweden Foundation, and the International CPTED Association. It will be the first ICA conference since the last pre-COVID event a few years ago. The topic of my keynote is Artificial Intelligence, Smart Cities, and CPTED – An Existential Threat to the ICA.

Based on my own experience with a tech start-up company a decade ago, and an experiment with some predictive critical infrastructure CPTED software, I came upon some fascinating books on AI. One in particular, AI 2041 by Kai-Fu Lee and Chen Qiufan, describes how AI will infiltrate all aspects of urban life – health, transport, schools, entertainment, crime prevention, and safety. They tell us there will be no part of the future city without AI. This is especially the case with the Smart City movement, in which scientists and planners envision a city embedded with AI.


Security tech has made inroads into the world of CPTED


THE SORCERER'S APPRENTICE  

What happens when AI systems go wrong? Artificial Intelligence is at the apex of new technologies and the implications for CPTED are significant.

AI is a potential threat of a higher order. It is a case of the Sorcerer’s Apprentice: an independent system that analyses problems and makes decisions through machine learning, without us. But when things inevitably go wrong, we end up scrambling like mad to stop the damage from unintended consequences (e.g., false arrests and over-policing).

If you’re interested in this topic in more detail, come to the 2021 ICA CPTED CONFERENCE, which runs from Nov 2 – 4 as a virtual conference. The dynamic conference program has dozens of sessions on crime prevention and CPTED from around the world. My keynote runs on Nov 3 at 9:20 – 9:45 AM Central European Time (1:20 – 1:45 AM Mountain Time). A recording of the conference will be made available to registrants for later viewing by those who are asleep in their time zones. POST SCRIPT: THIS PRESENTATION IS HERE.